Developing AI Agents
Agentic AI is transforming how organizations design intelligent systems that can interpret goals, make decisions, use tools, and complete multi-step tasks.
Description:
Unlike traditional AI interactions that focus on single prompts and responses, AI agents are designed to carry out workflows, access information, and support more dynamic forms of problem-solving and automation.
This session introduces the foundations of developing AI agents, including how they work, the core components that make them effective, and the most common design patterns used in modern agentic systems. Participants will explore how agents use instructions, memory, tools, and feedback loops to perform useful tasks across a variety of technical and business contexts.
The session emphasizes real-world applications through demonstrations and guided examples. The focus is on architecture concepts, practical use cases, and responsible design considerations. Participants will leave with a strong understanding of agentic AI and a clear framework for evaluating and designing AI agents.
Duration:
3 hours
Course Code: BDT 618
Learning Objectives:
After completing this course, participants will be able to:
- Explain what agentic AI is and how it differs from traditional AI interactions
- Identify the core components of an AI agent, including instructions, planning, memory, tools, and feedback
- Recognize common AI agent design patterns and their practical uses
- Describe how AI agents can support technical, operational, and knowledge-based workflows
- Apply prompting and orchestration concepts that improve agent performance
- Evaluate the benefits and limitations of agentic systems in real-world settings
- Identify risks such as hallucinations, insecure tool use, excessive autonomy, and poor task boundaries
Prerequisites:
General familiarity with AI concepts is recommended. No prior experience building AI agents is required. Suitable for beginning to intermediate technical professionals, including those in software, web, cloud, platform, and related roles.
Course Outline:
Introduction to Agentic AI
- Overview of agentic AI and how it differs from chatbots and prompt-based AI use
- Understanding the shift from single-response AI tools to multi-step intelligent workflows
- Common examples of AI agents in technical and business settings
- How agentic AI is changing the design of modern AI-enabled systems
- Capabilities and limitations of current agentic systems
Core Components of AI Agents
- Instructions, goals, and task definition
- Planning and task decomposition
- Memory and context management
- Tool use, data access, and external system interaction
- Human-in-the-loop decision points
- Outputs, actions, and workflow completion
Common Agent Design Patterns
- Single-agent workflows
- Prompt chaining and multi-step execution
- Tool-calling patterns
- Routing and task-selection approaches
- Review, reflection, and self-check patterns
- When to use an agentic approach versus a simpler AI workflow
Designing Effective AI Agents
- Defining the task boundary and level of autonomy
- Selecting the right inputs, outputs, and success criteria
- Choosing tools and capabilities for the agent
- Handling ambiguity, incomplete information, and failure scenarios
- Designing predictable and useful agent behavior
Prompting and Orchestration Strategies
- Writing effective instructions for AI agents
- Structuring prompts for reasoning, action, and output consistency
- Guiding decision-making and tool selection
- Using constraints and follow-up prompting to improve behavior
- Prompting for validation, checking, and refinement
- Improving reliability through better orchestration design
Real-World Applications of AI Agents
- Knowledge assistants and internal support agents
- Research and analysis workflows
- Documentation and content support
- Workflow coordination and task assistance
Risks and Responsible Use
- Identifying hallucinations and unreliable outputs
- Managing tool access and permission boundaries
- Protecting sensitive and proprietary information
- Avoiding over-reliance on autonomous behavior
- Knowing when human oversight is required
Training material provided: Yes (Digital format)


