We want our software to be easy for people to use. That means building a great UI. Now that people work with AI agents, we also need our software to be easy for agents to use. That means building a great MCP server. Model Context Protocol servers give AI agents fingers to feel around and act in the world, one tool at a time.
In this session, I’ll share design considerations from the Honeycomb.io MCP. I’ll contrast MCP design with human UX and software APIs, then discuss ways to tell whether the MCP is effective in production.
Format: Technical talk with live demos, code, and prompt examples
Description:
The role of the developer is fundamentally changing. We're moving from being executors who write every line of code to becoming orchestrators who conduct AI agents to build complex systems. This talk, based on my essay about transitioning from traditional coding to AI orchestration, shares practical insights from a year of experimenting with multi-agent development workflows. 🔗 Essay linked here: https://pivotech.substack.com/p/from-executor-to-orchestrator-my
Through real code examples and live demonstrations, I'll walk through my evolution from using ChatGPT for learning CS50 concepts to orchestrating Claude, Gemini CLI, and NotebookLM to build complete products. You'll discover the three distinct schools of AI development I've identified through hands-on experimentation: the One-Shot method, the Incremental approach, and my hybrid Layering technique.
I'll share the workflows I use to go from customer discovery sessions to deployed applications, including the mistakes, frustrations, and breakthroughs that shaped my approach. We'll explore the "Legacy Codebase Problem" that emerges from AI-generated code, the "Hyper-specificity Paradox" of detailed prompting, and the new skill set required to become an effective AI orchestrator.
Key Takeaways:
Three proven patterns for AI-assisted development and when to use each
Practical orchestration workflows for complex projects
The emerging skill set of the developer-orchestrator
How to maintain technical depth while leveraging AI efficiency
Real-world pitfalls and how to navigate them
Target Audience: Developers looking to evolve their practice in the age of intelligent agents. Minimal AI development experience required, e.g. prompting Claude.
Get in touch!
hi@guild.host