Abdul Jaleel Kavungal

Engineering Leader

Melbourne, Australia

I build the judgement and systems discipline required to make AI agents trustworthy enough for real work. I treat trust as an architectural outcome, not a model feature, designing the human, technical, and organisational structures that allow agents to earn it. I optimise for normal use, not demos; for consistency, not novelty.


Human-Agent Systems Trust Architecture Agent Observability Context Infrastructure Platform Engineering Cloud Architecture DevSecOps Serverless API Governance Sociotechnical Design

Depth across the stack, from architectural foundations to emerging human-agent design.

Trust Architecture Cloud & Architecture Platform Engineering AI Agents DevSecOps Sociotechnical Design

From procedural code to architecting trust in autonomous systems.

2010 PHP Engineer Procedural
2015 Cloud + ML Big Data, R
2017 AWS SA Pro TOGAF 9
2020 Serverless FinOps
2023 RAG & LLMs GenAI
2025 AI Agents HITL / MCP
2026 Trust Architecture Now

Trust is an architectural outcome, not a model feature. Each layer earns the one above.

06 Trust Adoption & reliance
05 Recovery Stop · revert · escalate
04 Legibility Inspectable state
03 Delegation Explicit boundaries
02 Permissions Scoped capability
01 Context Quality & freshness
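The "each layer earns the one above" rule can be sketched as a simple ordered check. This is a hypothetical illustration only, not code from any of the builds below: the layer names come from the stack above, while `highestEarnedLayer` and everything else is invented for the example.

```typescript
// Illustrative sketch: the six-layer trust stack as an ordered checklist,
// where a layer only counts if every layer beneath it already holds.
const TRUST_STACK = [
  "Context",     // 01: quality & freshness
  "Permissions", // 02: scoped capability
  "Delegation",  // 03: explicit boundaries
  "Legibility",  // 04: inspectable state
  "Recovery",    // 05: stop / revert / escalate
  "Trust",       // 06: adoption & reliance
] as const;

type Layer = (typeof TRUST_STACK)[number];

// Returns the highest layer that is actually "earned": the first failing
// layer blocks everything above it, so trust cannot be claimed directly.
function highestEarnedLayer(satisfied: Set<Layer>): Layer | null {
  let highest: Layer | null = null;
  for (const layer of TRUST_STACK) {
    if (!satisfied.has(layer)) break; // a gap here blocks all layers above
    highest = layer;
  }
  return highest;
}

// Recovery is missing, so Trust is not earned even though it is claimed.
const earned = highestEarnedLayer(
  new Set<Layer>(["Context", "Permissions", "Delegation", "Legibility", "Trust"])
);
console.log(earned); // "Legibility"
```

The design choice the sketch encodes: trust is the output of the stack, never an input you can assert at the top.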

Designing the right level of human involvement for each workflow โ€” not the maximum.

HIC
Human in command - every step approved
HITL
Human in the loop - approve key actions
HOTL
Human on the loop - supervise & intervene
HOOTL
Human out of the loop - observe outcomes
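The spectrum above can be sketched as an approval policy. A minimal hypothetical illustration, assuming a simple two-level risk label on actions: the mode names come from the list above, while the types and `requiresApproval` function are invented for the example.

```typescript
// Hypothetical sketch: the four oversight modes as an up-front approval gate.
type OversightMode = "HIC" | "HITL" | "HOTL" | "HOOTL";

interface AgentAction {
  name: string;
  risk: "low" | "high"; // assumed two-level risk label, for illustration
}

// HIC: every step approved. HITL: only key (high-risk) steps.
// HOTL and HOOTL: no gate up front - the human supervises and intervenes
// (HOTL) or reviews outcomes after the fact (HOOTL).
function requiresApproval(mode: OversightMode, action: AgentAction): boolean {
  switch (mode) {
    case "HIC":
      return true;
    case "HITL":
      return action.risk === "high";
    case "HOTL":
    case "HOOTL":
      return false;
  }
}

console.log(requiresApproval("HIC", { name: "send-email", risk: "low" }));  // true
console.log(requiresApproval("HITL", { name: "drop-table", risk: "high" })); // true
console.log(requiresApproval("HOTL", { name: "drop-table", risk: "high" })); // false
```

Picking the mode per workflow, rather than defaulting to the strictest one, is what "the right level of human involvement - not the maximum" means in practice.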

Daily-use technologies, weighted by depth and frequency.

AWS AI Agents Serverless TypeScript Python MCP Terraform Kubernetes RAG LangGraph Bedrock Claude Code Go Lambda CDK DynamoDB Athena Step Functions EventBridge OpenSearch

Active builds across five domains - each one a case study in trust, delegation, and supervised autonomy.

Human-Agent Systems 04 builds

Where the agent meets the operator. Routing, oversight, scoped delegation, and the scaffolding humans need to stay in command without becoming bottlenecks.

  • neurl - AI routing infrastructure with model registry and policy engine
  • aicapital - Decision support with evidence boards and structured frameworks
  • flow-at - Workflow orchestration with conditional logic and approval chains
  • aivi - Enterprise AI maturity diagnostics and benchmarking
Supervised Autonomy 03 builds

Real-world systems where a wrong action has weight - physical, financial, environmental. Supervision, intervention, and recovery are first-class concerns.

  • robocars - Fleet console for self-driving vehicles: mission control and intervention logging
  • tesl-on - Energy platform with multi-site optimisation and forecasting
  • planetpi - Geospatial intelligence: anomaly detection and risk scoring
Trust & Security Posture 02 builds

The substrate underneath everything else. Vulnerability lineage, AI risk surfaces, and the slow, careful work of keeping a system honest as it grows.

  • vim - Vulnerability Inheritance Map: tracing CVEs across forks and vendored deps
  • owasp-ai - OWASP-aligned AI maturity assessment toolkit
Decision Architectures 03 builds

Systems that help humans choose well under uncertainty โ€” game selection, scenario planning, evidence-weighted bets, statistically honest experiments.

  • finiteinfinite - Strategic decision system with portfolio design and scenario planning
  • lessventures - Venture studio operating system: capital allocation and portfolio analytics
  • dynamic-fi - Scientific platform: experiment design and statistical methodology
Knowledge & Story 02 builds

Tools for the human side of the equation - narrative architecture, brand voice, and the diagnostics that help people perform at their best.

  • playbookfilms - Narrative platform: story builder, brand voice engine, asset library
  • smile-dk - Human performance: energy diagnostics and personalised protocols

Each cluster is a working hypothesis about where trust, autonomy, and good design intersect. See all 98 repositories →

98+
Repositories
6K+
Followers
33
Recommendations
1 Trust is an architectural outcome, not a model feature
2 Optimise for normal use, not demos
3 Design the environment before celebrating the agent
4 Make delegation explicit at every boundary
5 Humans in command without becoming bottlenecks
6 Prefer simplicity over orchestration theatre
7 Measure consistency, not just possibility
8 Turn every insight into reusable doctrine
"I am not trying to become someone who knows a lot about AI. I am becoming someone who knows how to design the human, technical, and organisational structures that allow AI agents to earn trust."

Open to opportunities