<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"><channel><title><![CDATA[James Briggs on Odysee]]></title><description><![CDATA[Founder at Aurelio AI, startup advisor, and dev advocate @ Pinecone.<br /><br />NLP + LLM Consulting:<br />https://aurelio.ai<br />]]></description><link>https://odysee.com/@JamesBriggs:0</link><image><url>https://thumbnails.lbry.com/UCv83tO5cePwHMt1952IVVHw</url><title>James Briggs on Odysee</title><link>https://odysee.com/@JamesBriggs:0</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 30 Apr 2026 01:37:31 GMT</lastBuildDate><atom:link href="https://odysee.com/$/rss/@jamesbriggs:0" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><itunes:author>James Briggs</itunes:author><itunes:category text="Leisure"></itunes:category><itunes:image href="https://thumbnails.lbry.com/UCv83tO5cePwHMt1952IVVHw"/><itunes:owner><itunes:name>James Briggs</itunes:name><itunes:email>no-reply@odysee.com</itunes:email></itunes:owner><itunes:explicit>false</itunes:explicit><item><title><![CDATA[Predictive Query Language (PQL) Explained]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/tEvUfsYqQAM" width="480" alt="thumbnail" title="Predictive Query Language (PQL) Explained" /></p>Overview of Kumo AI's Predictive Query Language (PQL) - a SQL-like syntax for developing state-of-the-art predictive analytics.<br /><br />💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br />...<br 
/>https://www.youtube.com/watch?v=tEvUfsYqQAM]]></description><link>https://odysee.com/predictive-query-language-%28pql%29:2a34741d39eea6b285d80b3c7011e664a53187b1</link><guid isPermaLink="true">https://odysee.com/predictive-query-language-%28pql%29:2a34741d39eea6b285d80b3c7011e664a53187b1</guid><pubDate>Thu, 02 Oct 2025 15:39:26 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/predictive-query-language-(pql)/2a34741d39eea6b285d80b3c7011e664a53187b1/bcc0da.mp4" length="4125263" type="video/mp4"/><itunes:title>Predictive Query Language (PQL) Explained</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/tEvUfsYqQAM"/><itunes:duration>59</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Data Science as a Service | Kumo AI Full Walkthrough]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/ylyktSE8wcw" width="480" alt="thumbnail" title="Data Science as a Service | Kumo AI Full Walkthrough" /></p>Building recommendation systems is hard.  In data science, we can spend months wrangling data, training models, and still end up with mediocre results. That's where Kumo AI comes in — it's a service that abstracts away the complexity of building Graph Neural Networks (GNNs) for predictive analytics.<br /><br />In this guide, we'll build a complete e-commerce recommendation engine using real H&M data with 33 million transactions. 
By the end, we'll have a system that can:<br /><br />- Predict customer lifetime value for the next 30 days<br />- Generate personalized product recommendations<br />- Forecast purchase behavior to identify active customers<br /><br />All of this can be done in just a couple of hours - not months.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/cookbook/blob/main/recsys/ecommerce/kumo-hm/kumo-hm.ipynb<br /><br />💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#datascience #machinelearning #python <br /><br />00:00 Kumo AI and GNNs<br />07:39 Kumo Setup<br />12:17 Kumo Connectors<br />14:45 Getting Data into BigQuery<br />20:39 Building the Graph in Kumo<br />28:34 Predictive Query Language (PQL)<br />35:01 Personalized Product Recommendations<br />38:44 Predicting Purchase Volume<br />41:44 Making Predictions with Kumo<br />27:10 Analysis and Prediction with Kumo<br />52:36 When to use Kumo<br />...<br />https://www.youtube.com/watch?v=ylyktSE8wcw]]></description><link>https://odysee.com/data-science-as-a-service-kumo-ai-full:6ae1c775c811640e216986b2e8f8c8cdc18baa22</link><guid isPermaLink="true">https://odysee.com/data-science-as-a-service-kumo-ai-full:6ae1c775c811640e216986b2e8f8c8cdc18baa22</guid><pubDate>Thu, 02 Oct 2025 14:01:36 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/data-science-as-a-service-kumo-ai-full/6ae1c775c811640e216986b2e8f8c8cdc18baa22/e4c723.mp4" length="539537314" type="video/mp4"/><itunes:title>Data Science as a Service | Kumo AI Full Walkthrough</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/ylyktSE8wcw"/><itunes:duration>3324</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Agents are coming for 
Ecom]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/XfZbmic6wn8" width="480" alt="thumbnail" title="Agents are coming for Ecom" /></p>Ecom was always at the forefront of ML and data science, but I felt like the latest AI and agentic wave had left it behind — after building with KumoRFM, though, I'm convinced ecommerce + GNNs + agents is going to be big — wdyt?<br /><br />Full video here: https://youtu.be/MFp9vjr6rgA<br />KumoRFM: https://bit.ly/47x3WSk<br /><br />💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br />...<br />https://www.youtube.com/watch?v=XfZbmic6wn8]]></description><link>https://odysee.com/agents-are-coming-for-ecom:ec83efe9790c27e11aa15d833893cd4927c5b962</link><guid isPermaLink="true">https://odysee.com/agents-are-coming-for-ecom:ec83efe9790c27e11aa15d833893cd4927c5b962</guid><pubDate>Tue, 23 Sep 2025 14:09:04 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/agents-are-coming-for-ecom/ec83efe9790c27e11aa15d833893cd4927c5b962/4e793e.mp4" length="4821045" type="video/mp4"/><itunes:title>Agents are coming for Ecom</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/XfZbmic6wn8"/><itunes:duration>38</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Build Agentic Ecommerce with KumoRFM]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/MFp9vjr6rgA" width="480" alt="thumbnail" title="Build Agentic Ecommerce with KumoRFM" /></p>In this video we explore the use of agents and LLMs for ecommerce and develop our own agent enabling advanced data science and analytics for ecommerce.<br /><br />We use Kumo AI's Relational Foundation 
Model (RFM) to produce insanely high quality predictions super fast, enabling a conversational experience with what is essentially an expert data science agent.<br /><br />📌 Notebook Code: https://github.com/aurelio-labs/cookbook/tree/main/gen-ai/agents/ecommerce-agent<br />📍 AI App Repo: https://github.com/jamescalam/ecommerce-agent<br /><br />💡 KumoRFM: https://bit.ly/47x3WSk<br />📊 Kumo AI: https://bit.ly/4gduL04<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#datascience #machinelearning #python <br /><br />00:00 Ecommerce and AI<br />01:46 Agents for Ecommerce<br />02:34 KumoRFM<br />03:24 Talking with a KumoRFM Agent<br />10:58 Using KumoRFM<br />16:51 Making Predictions with KumoRFM<br />19:02 Agentic Predictions<br />22:34 Query Dataframes Tool<br />25:23 Query KumoRFM Tool<br />26:50 Building the Agent Graph<br />36:12 Testing our Ecommerce Agent<br />46:14 Ecommerce Agent App Setup<br />48:00 Ecommerce and Agents<br />...<br />https://www.youtube.com/watch?v=MFp9vjr6rgA]]></description><link>https://odysee.com/build-agentic-ecommerce-with-kumorfm:1a79292a362b0d64f2eb0edcffb2bbb29d462087</link><guid isPermaLink="true">https://odysee.com/build-agentic-ecommerce-with-kumorfm:1a79292a362b0d64f2eb0edcffb2bbb29d462087</guid><pubDate>Wed, 17 Sep 2025 15:00:41 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/build-agentic-ecommerce-with-kumorfm/1a79292a362b0d64f2eb0edcffb2bbb29d462087/91d8f0.mp4" length="402678703" type="video/mp4"/><itunes:title>Build Agentic Ecommerce with KumoRFM</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/MFp9vjr6rgA"/><itunes:duration>2974</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[OpenAI's Agents SDK | Tools and Agents-as-Tools Explained]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/_RTxDOnLfVM" width="480" 
alt="thumbnail" title="OpenAI's Agents SDK | Tools and Agents-as-Tools Explained" /></p>Agents SDK provides various approaches for tool use, including pre-built tools and all the features we need to develop custom tools. In this chapter, we'll learn everything there is to know about tools, tool-use, and agents-as-tools.<br /><br />📌 Article and Code:<br />https://www.aurelio.ai/learn/agents-sdk-tools<br /><br />💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintelligence #openai #ai #aiagents <br /><br />00:00 Agents SDK Tools<br />00:59 Using Prebuilt Tools<br />02:30 Custom Tools<br />09:36 FunctionTool Object<br />13:43 Agents as Tools<br />...<br />https://www.youtube.com/watch?v=_RTxDOnLfVM]]></description><link>https://odysee.com/openai%27s-agents-sdk-tools-and-agents-as:b5b6aa3973ccb2dd2928c4bb9dbcc3dd2f2435bd</link><guid isPermaLink="true">https://odysee.com/openai%27s-agents-sdk-tools-and-agents-as:b5b6aa3973ccb2dd2928c4bb9dbcc3dd2f2435bd</guid><pubDate>Fri, 08 Aug 2025 11:30:38 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/openai&apos;s-agents-sdk-tools-and-agents-as/b5b6aa3973ccb2dd2928c4bb9dbcc3dd2f2435bd/c4b9e3.mp4" length="152866898" type="video/mp4"/><itunes:title>OpenAI&apos;s Agents SDK | Tools and Agents-as-Tools Explained</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/_RTxDOnLfVM"/><itunes:duration>1100</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[AI Observability with OpenAI Agents SDK]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/T2ytN20H-BQ" width="480" alt="thumbnail" title="AI Observability with OpenAI Agents SDK" /></p>Agents SDK integrates with OpenAI's built-in Traces dashboard 
found within the OpenAI Platform for out-of-the-box observability and telemetry of our AI agents.<br /><br />In this video, we'll take a look at both the tracing enabled by default whenever we use Agents SDK (with an OpenAI API key present), and custom tracing.<br /><br />📖 Full Course:<br />https://www.aurelio.ai/course/agents-sdk<br />📌 Code:<br />https://github.com/aurelio-labs/agents-sdk-course/blob/main/chapters/02-tracing.ipynb<br /><br />💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #aiagents #programming <br /><br />00:00 Agents SDK Tracing<br />00:17 OpenAI Traces Dashboard<br />02:03 Agents SDK Tracing Setup<br />03:00 Access to OpenAI Traces<br />04:03 Creating Agents SDK Traces<br />05:29 Custom Traces<br />08:24 Tracing Agent Tools<br />12:15 Conclusion<br />...<br />https://www.youtube.com/watch?v=T2ytN20H-BQ]]></description><link>https://odysee.com/ai-observability-with-openai-agents-sdk:65db2ed0324045abc0214d7e58bff265421e4d00</link><guid isPermaLink="true">https://odysee.com/ai-observability-with-openai-agents-sdk:65db2ed0324045abc0214d7e58bff265421e4d00</guid><pubDate>Thu, 31 Jul 2025 13:00:58 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/ai-observability-with-openai-agents-sdk/65db2ed0324045abc0214d7e58bff265421e4d00/7ae577.mp4" length="96265983" type="video/mp4"/><itunes:title>AI Observability with OpenAI Agents SDK</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/T2ytN20H-BQ"/><itunes:duration>795</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[You're doing Agentic chat history wrong | OpenAI Agents SDK]]></title><description><![CDATA[<p><img 
src="https://thumbnails.lbry.com/9nwWJWyxSyk" width="480" alt="thumbnail" title="You're doing Agentic chat history wrong | OpenAI Agents SDK" /></p>Prompting is an essential component when working with LLMs, and Agents SDK naturally has its own way of handling various components of prompts. In this chapter, we'll examine how to utilise static and dynamic prompting, as well as how to correctly use system, user, assistant, and tool prompts to build event-based conversations, not interaction-based conversations. Then, we'll see how these come together to create advanced conversational agents that use chat history the right way.<br /><br />📌 Code: https://github.com/aurelio-labs/agents-sdk-course/blob/main/chapters/01-prompting.ipynb<br />📖 Article: https://www.aurelio.ai/learn/agents-sdk-prompting<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#aiagents #openai #ai #coding #artificialintelligence #programming <br /><br />00:00 OpenAI Agents SDK<br />00:56 Agents SDK Setup<br />01:56 Static Instructions<br />06:03 Dynamic Prompts<br />08:38 Rethinking Agentic Chat History<br />11:09 Message Types<br />20:25 How to Use SDK Message Types<br />22:09 Developer Messages<br />24:31 Assistant Messages<br />26:37 Chat History<br />27:53 Function Calls<br />31:03 Conclusion for Agents SDK Prompting<br />...<br />https://www.youtube.com/watch?v=9nwWJWyxSyk]]></description><link>https://odysee.com/you%27re-doing-agentic-chat-history-2:373d560b6310394ba68ce18bf4235a7fedfd7a85</link><guid isPermaLink="true">https://odysee.com/you%27re-doing-agentic-chat-history-2:373d560b6310394ba68ce18bf4235a7fedfd7a85</guid><pubDate>Thu, 24 Jul 2025 17:44:13 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/you&apos;re-doing-agentic-chat-history-2/373d560b6310394ba68ce18bf4235a7fedfd7a85/b3d010.mp4" length="265776295" type="video/mp4"/><itunes:title>You&apos;re doing Agentic chat history wrong | OpenAI 
Agents SDK</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/9nwWJWyxSyk"/><itunes:duration>1976</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[You're doing AGENTIC "chat history" wrong]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/fj9dF_HXL-U" width="480" alt="thumbnail" title="You're doing AGENTIC &quot;chat history&quot; wrong" /></p>💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #programming #aiagents<br />...<br />https://www.youtube.com/watch?v=fj9dF_HXL-U]]></description><link>https://odysee.com/you%27re-doing-agentic-chat-history-wrong:0dc722e16e82ef0c27f1cc1675ee8b2ff219a913</link><guid isPermaLink="true">https://odysee.com/you%27re-doing-agentic-chat-history-wrong:0dc722e16e82ef0c27f1cc1675ee8b2ff219a913</guid><pubDate>Tue, 22 Jul 2025 14:01:09 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/you&apos;re-doing-agentic-chat-history-wrong/0dc722e16e82ef0c27f1cc1675ee8b2ff219a913/3dcc85.mp4" length="4422108" type="video/mp4"/><itunes:title>You&apos;re doing AGENTIC &quot;chat history&quot; wrong</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/fj9dF_HXL-U"/><itunes:duration>58</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[End-to-end AI Agent Project with LangChain | Full Walkthrough]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/AO6WbXTeDow" width="480" alt="thumbnail" title="End-to-end AI Agent Project with LangChain | Full Walkthrough" /></p>In this video, we'll develop a full AI Agent application using Python, LangChain, FastAPI, AI 
agents, tools, streaming, and more.<br /><br />🔗 Full Course: https://www.aurelio.ai/course/langchain<br />📌 Code: https://github.com/aurelio-labs/langchain-course/tree/main/chapters/09-capstone<br /><br />💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #langchain #programming #aiagents <br /><br />00:00 End-to-End LangChain Agent<br />01:39 Setup the AI App<br />04:37 API Setup<br />11:28 API Token Generator<br />15:57 Agent Executor in API<br />34:03 Async SerpAPI Tool<br />40:08 Running the App<br />11:44 Course Completion<br />...<br />https://www.youtube.com/watch?v=AO6WbXTeDow]]></description><link>https://odysee.com/end-to-end-ai-agent-project-with:703ee905b4da1bdfac7936aea287f46c239a8203</link><guid isPermaLink="true">https://odysee.com/end-to-end-ai-agent-project-with:703ee905b4da1bdfac7936aea287f46c239a8203</guid><pubDate>Tue, 15 Jul 2025 12:30:18 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/end-to-end-ai-agent-project-with/703ee905b4da1bdfac7936aea287f46c239a8203/1ed2c6.mp4" length="352243712" type="video/mp4"/><itunes:title>End-to-end AI Agent Project with LangChain | Full Walkthrough</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/AO6WbXTeDow"/><itunes:duration>2755</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LangChain Streaming and API Integration]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/eor3eU9eReA" width="480" alt="thumbnail" title="LangChain Streaming and API Integration" /></p>Streaming is a common pattern in AI applications. 
We've all seen AI interfaces where answers from AI chatbots appear on the screen as a word-by-word stream of information.<br /><br />This word-by-word stream looks nice, but it provides more than aesthetics: streamed text feels more natural to the user, and the user can begin reading a response sooner.<br /><br />The Time-to-First-Token (TTFT) of models like gpt-4.1-mini is very low (just 1-2 seconds in many cases). However, the full generation time (or Time-to-Last-Token, TTLT) can vary significantly. When generating long responses from gpt-4.1-mini, a TTLT of 10-20 seconds is typical.<br /><br />There is a significant difference between having users wait 1-2 seconds and 10-20 seconds. Beyond this, streaming also allows us to send intermediate steps to our interfaces. If an agent uses various tools and/or takes multiple steps to generate a final response, we can use streaming to send this information to our application, allowing us to render UI components that inform the user about the agent's actions.<br /><br />Using these intermediate step components, we provide continual feedback to the user, preventing them from being stuck staring at a blank screen. These components also give us an interface for surfacing more information to the user, such as research sources or results from intermediate calculations.<br /><br />In this chapter, we will introduce LangChain's async streaming. 
Async streaming is an essential feature for APIs wanting to support real-time information streaming and enable the enhanced user experience described above.<br /><br />🔗 Full Course: https://www.aurelio.ai/course/langchain<br />📌 Article and Code: https://www.aurelio.ai/learn/langchain-streaming<br /><br />💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #langchain #programming #aiagents <br /><br />00:00 LangChain Streaming<br />00:54 Streaming for AI<br />06:08 Basic LangChain Streaming<br />10:16 Streaming with Agents<br />28:13 Custom Agent and Streaming<br />31:13 Streaming to an API<br />...<br />https://www.youtube.com/watch?v=eor3eU9eReA]]></description><link>https://odysee.com/langchain-streaming-and-api-integration:b0bffa79e399046d1cf08f803be75874b72a6eb6</link><guid isPermaLink="true">https://odysee.com/langchain-streaming-and-api-integration:b0bffa79e399046d1cf08f803be75874b72a6eb6</guid><pubDate>Fri, 11 Jul 2025 12:30:53 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/langchain-streaming-and-api-integration/b0bffa79e399046d1cf08f803be75874b72a6eb6/b3d96d.mp4" length="376181608" type="video/mp4"/><itunes:title>LangChain Streaming and API Integration</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/eor3eU9eReA"/><itunes:duration>2268</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LangChain Expression Language (LCEL)]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/v9G-h6Ygokk" width="480" alt="thumbnail" title="LangChain Expression Language (LCEL)" /></p>This chapter will introduce LangChain's Expression Language (LCEL). 
We'll focus on understanding how LCEL works under the hood and how it is implemented with OpenAI's LLMs.<br /><br />We'll compare LCEL against the traditional methods. We will build a pipeline where the user inputs a specific topic, and then the LLM looks for and returns a report on the specified topic. This generates a research report for the user.<br /><br />📌 Article and Code:<br />https://www.aurelio.ai/learn/langchain-lcel<br /><br />💡 Subscribe for Latest Courses and Tutorials:<br />https://www.aurelio.ai/subscribe<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #langchain #programming #aiagents <br /><br />00:00 LangChain Expression Language<br />00:58 Traditional Chains in LangChain<br />03:02 LangChain LCEL<br />03:55 LCEL Pipe Operator<br />09:09 LangChain RunnableLambda<br />12:41 LCEL Runnable Parallel and Passthrough<br />...<br />https://www.youtube.com/watch?v=v9G-h6Ygokk]]></description><link>https://odysee.com/langchain-expression-language-%28lcel%29-2:5b0a7d973b83dc47133d446b663cc58085a4dee1</link><guid isPermaLink="true">https://odysee.com/langchain-expression-language-%28lcel%29-2:5b0a7d973b83dc47133d446b663cc58085a4dee1</guid><pubDate>Fri, 04 Jul 2025 14:04:12 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/langchain-expression-language-(lcel)-2/5b0a7d973b83dc47133d446b663cc58085a4dee1/3851a8.mp4" length="173793709" type="video/mp4"/><itunes:title>LangChain Expression Language (LCEL)</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/v9G-h6Ygokk"/><itunes:duration>1090</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LangChain Agent Executor Deep Dive | Walkthrough for 2025]]></title><description><![CDATA[<p><img 
src="https://thumbnails.lbry.com/iN1Xx8ca_8I" width="480" alt="thumbnail" title="LangChain Agent Executor Deep Dive | Walkthrough for 2025" /></p>In this video, we will continue from the introduction to agents and dive deeper, learning how to build our own custom agent execution loop for v0.3 of LangChain.<br /><br />When we talk about agents, a significant part of an "agent" is simple code logic, iteratively rerunning LLM calls and processing their output. The exact logic varies significantly, but one well-known example is the ReAct agent.<br /><br />Reason + Action (ReAct) agents use iterative reasoning and action steps to incorporate chain-of-thought and tool-use into their execution. During the reasoning step, the LLM generates the steps to take to answer the query. Next, the LLM generates the action input, which our code logic parses into a tool call.<br /><br />Following our action step, we get an observation from the tool call. Then, we feed the observation back into the agent executor logic for a final answer or further reasoning and action steps.<br /><br />The agent and agent executor we will be building will follow this pattern.<br /><br />📌 Article and code:<br />https://www.aurelio.ai/learn/langchain-agent-executor<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #coding #aiagents #langchain <br /><br />00:00 LangChain v0.3 Agent Executor<br />09:26 Creating an Agent with LCEL<br />13:53 Executing Tool Calls<br />16:58 Agentic Final Answers<br />25:58 Building a Custom Agent Executor<br />32:47 Executing Multiple Tool Calls<br />...<br />https://www.youtube.com/watch?v=iN1Xx8ca_8I]]></description><link>https://odysee.com/langchain-agent-executor-deep-dive:ddf53ccf810e5f3d9595077eb893364f8105549a</link><guid 
isPermaLink="true">https://odysee.com/langchain-agent-executor-deep-dive:ddf53ccf810e5f3d9595077eb893364f8105549a</guid><pubDate>Thu, 26 Jun 2025 12:01:15 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/langchain-agent-executor-deep-dive/ddf53ccf810e5f3d9595077eb893364f8105549a/daab80.mp4" length="274711835" type="video/mp4"/><itunes:title>LangChain Agent Executor Deep Dive | Walkthrough for 2025</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/iN1Xx8ca_8I"/><itunes:duration>2093</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LangChain Agents in 2025 | Full Tutorial for v0.3]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/Gi7nqB37WEY" width="480" alt="thumbnail" title="LangChain Agents in 2025 | Full Tutorial for v0.3" /></p>In this chapter, we will introduce LangChain's Agents, adding the ability to use tools such as search and calculators to complete tasks that normal LLMs cannot fulfil. 
We will be using OpenAI's gpt-4o-mini.<br /><br />📌 Article and code:<br />https://www.aurelio.ai/learn/langchain-agents-intro<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #coding #aiagents #langchain <br /><br />00:00 LangChain Agents 101<br />01:27 Introduction to Tools<br />07:05 Creating an Agent<br />11:27 Agent Executor<br />18:02 Web Search Agent<br />...<br />https://www.youtube.com/watch?v=Gi7nqB37WEY]]></description><link>https://odysee.com/langchain-agents-in-2025-full-tutorial:5f1723159c9db9f6760db849ffdb4669f91f5379</link><guid isPermaLink="true">https://odysee.com/langchain-agents-in-2025-full-tutorial:5f1723159c9db9f6760db849ffdb4669f91f5379</guid><pubDate>Tue, 24 Jun 2025 12:00:34 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/langchain-agents-in-2025-full-tutorial/5f1723159c9db9f6760db849ffdb4669f91f5379/8c84a2.mp4" length="272533288" type="video/mp4"/><itunes:title>LangChain Agents in 2025 | Full Tutorial for v0.3</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/Gi7nqB37WEY"/><itunes:duration>1288</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Conversational Memory in LangChain for 2025]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/EtldFS3JbGs" width="480" alt="thumbnail" title="Conversational Memory in LangChain for 2025" /></p>Conversational memory allows our chatbots and agents to remember previous interactions within a conversation. 
Without conversational memory, our chatbots would only ever be able to respond to the last message they received, essentially forgetting all previous messages with each new message.<br /><br />Naturally, conversations require our chatbots to be able to respond over multiple interactions and refer to previous messages to understand the context of the conversation.<br /><br />LangChain versions 0.0.x consisted of various conversational memory types. Most of these have since been deprecated, but they still hold value for understanding the different approaches to building conversational memory.<br /><br />Throughout the video, we will refer to these older memory types and then rewrite them for LangChain v0.3 (the latest version in 2025) using the recommended RunnableWithMessageHistory class. We will learn about:<br /><br />- ConversationBufferMemory<br />- ConversationBufferWindowMemory<br />- ConversationSummaryMemory<br />- ConversationSummaryBufferMemory<br /><br />We'll work through each of these memory types in turn and rewrite each one using the RunnableWithMessageHistory class.<br /><br />📌 Article and code:<br />https://www.aurelio.ai/learn/langchain-conversational-memory<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #coding #aiagents #langchain <br /><br />00:00 Conversational Memory in LangChain<br />01:12 LangChain Chat Memory Types<br />04:26 LangChain ConversationBufferMemory<br />08:23 Buffer Memory with LCEL<br />13:14 LangChain ConversationBufferWindowMemory<br />16:01 Buffer Window Memory with LCEL<br />22:32 LangChain ConversationSummaryMemory<br />25:17 Summary Memory with LCEL<br />30:12 Token Usage in LangSmith<br />32:08 Conversation Summary Buffer Memory<br />34:36 Summary Buffer with LCEL<br />...<br 
/>https://www.youtube.com/watch?v=EtldFS3JbGs]]></description><link>https://odysee.com/conversational-memory-in-langchain-for:a22d4ffa1dacbfd2c318b3a49808294fbad883ef</link><guid isPermaLink="true">https://odysee.com/conversational-memory-in-langchain-for:a22d4ffa1dacbfd2c318b3a49808294fbad883ef</guid><pubDate>Thu, 19 Jun 2025 16:47:43 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/conversational-memory-in-langchain-for/a22d4ffa1dacbfd2c318b3a49808294fbad883ef/46e081.mp4" length="496425185" type="video/mp4"/><itunes:title>Conversational Memory in LangChain for 2025</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/EtldFS3JbGs"/><itunes:duration>2660</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Prompt Templating and Techniques in LangChain]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/jPeOAOvKFHE" width="480" alt="thumbnail" title="Prompt Templating and Techniques in LangChain" /></p>Until 2021, to use an AI model for a specific use case, we would need to fine-tune the model weights themselves. That would require huge amounts of training data and significant compute to fine-tune any reasonably performing model.<br /><br />Instruction-fine-tuned large language models (LLMs) changed this fundamental rule of applying AI models to new use cases. Rather than needing to either train a model from scratch or fine-tune an existing model, these new LLMs could adapt incredibly well to a new problem or use case with nothing more than a prompt change.<br /><br />Prompts allow us to completely change the functionality of an AI pipeline. Through natural language, we tell our LLM what it needs to do, and with the right AI pipeline and prompting, it often works.<br /><br />LangChain naturally has many functionalities geared towards helping us build our prompts. 
We can build dynamic prompting pipelines that modify the structure and content of what we feed into our LLM based on essentially any parameter we would like. In this example, we'll explore the essentials of prompting in LangChain and apply this in a demo Retrieval Augmented Generation (RAG) pipeline.<br /><br />📌 Article and code:<br />https://www.aurelio.ai/learn/langchain-prompts<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #coding #aiagents #langchain <br /><br />00:00 Prompts are Fundamental to LLMs<br />02:13 Building Good LLM Prompts<br />07:13 LangChain Prompts Code Setup<br />11:36 Using our LLM with Templates<br />16:54 Few-shot Prompting<br />23:11 Chain of Thought Prompting<br />...<br />https://www.youtube.com/watch?v=jPeOAOvKFHE]]></description><link>https://odysee.com/prompt-templating-and-techniques-in:794d15efda8cb0dbf708fa565f5ab66fc5f9b62c</link><guid isPermaLink="true">https://odysee.com/prompt-templating-and-techniques-in:794d15efda8cb0dbf708fa565f5ab66fc5f9b62c</guid><pubDate>Wed, 11 Jun 2025 12:30:53 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/prompt-templating-and-techniques-in/794d15efda8cb0dbf708fa565f5ab66fc5f9b62c/fea968.mp4" length="368225414" type="video/mp4"/><itunes:title>Prompt Templating and Techniques in LangChain</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/jPeOAOvKFHE"/><itunes:duration>1795</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LangSmith 101 for AI Observability | Full Walkthrough]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/Iyc80hY2yYk" width="480" alt="thumbnail" title="LangSmith 101 for AI Observability | Full Walkthrough" /></p>LangSmith is a built-in observability service and 
platform that integrates easily with LangChain. We use LangSmith as an incredibly powerful approach to AI observability throughout the AI Engineers Guide to LangChain. We recommend using it beyond this course for general AI and LLM development with LangChain.<br /><br />📌 Full Course: https://www.aurelio.ai/course/langchain<br />➤ Repo: https://github.com/aurelio-labs/langchain-course<br />👋🏼 New AI Services: https://platform.aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #aiagents #langchain #coding #programming #artificialintellegence #python <br /><br />00:00 LangChain's LangSmith<br />00:24 LangSmith Setup<br />02:37 LangSmith Tracing<br />04:54 Custom LangSmith Traceables<br />08:18 LangSmith Conclusion<br />...<br />https://www.youtube.com/watch?v=Iyc80hY2yYk]]></description><link>https://odysee.com/langsmith-101-for-ai-observability-full:52e7ffcc7a5ca9fa8131bd932aabb68feb2ac956</link><guid isPermaLink="true">https://odysee.com/langsmith-101-for-ai-observability-full:52e7ffcc7a5ca9fa8131bd932aabb68feb2ac956</guid><pubDate>Tue, 03 Jun 2025 12:00:56 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/langsmith-101-for-ai-observability-full/52e7ffcc7a5ca9fa8131bd932aabb68feb2ac956/67549b.mp4" length="83773722" type="video/mp4"/><itunes:title>LangSmith 101 for AI Observability | Full Walkthrough</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/Iyc80hY2yYk"/><itunes:duration>541</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LangChain v0.3 — Getting Started]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/i2jGTcJsDPM" width="480" alt="thumbnail" title="LangChain v0.3 — Getting Started" /></p>LangChain is one of the most popular open-source libraries for AI Engineers. 
Its goal is to abstract away the complexity of building AI software, provide easy-to-use building blocks, and facilitate switching between AI service providers.<br /><br />In this chapter, we will introduce LangChain by building a simple LLM-powered assistant.<br /><br />📌 Full Course: https://www.aurelio.ai/course/langchain<br />➤ Repo: https://github.com/aurelio-labs/langchain-course<br />👋🏼 New AI Services: https://platform.aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #coding #aiagents #langchain <br /><br />00:00 Getting Started with LangChain<br />00:46 Local Setup<br />03:32 Colab Setup<br />04:43 Initializing our OpenAI LLMs<br />09:06 LLM Prompting<br />10:31 LangChain Prompt Templates<br />15:20 Creating a LLM Chain with LCEL<br />20:31 Another Text Generation Pipeline<br />23:43 Structured Outputs in LangChain<br />28:27 Image Generation in LangChain<br />...<br />https://www.youtube.com/watch?v=i2jGTcJsDPM]]></description><link>https://odysee.com/langchain-v0.3-%E2%80%94-getting-started:5b4dc3e35854fc6cdaa623bb9dc65e0968d300e6</link><guid isPermaLink="true">https://odysee.com/langchain-v0.3-%E2%80%94-getting-started:5b4dc3e35854fc6cdaa623bb9dc65e0968d300e6</guid><pubDate>Thu, 29 May 2025 12:45:06 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/langchain-v0.3-—-getting-started/5b4dc3e35854fc6cdaa623bb9dc65e0968d300e6/5c47b2.mp4" length="309302792" type="video/mp4"/><itunes:title>LangChain v0.3 — Getting Started</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/i2jGTcJsDPM"/><itunes:duration>2027</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[When Should You Use LangChain?]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/d2-R_o5I5FQ" width="480" alt="thumbnail" title="When 
Should You Use LangChain?" /></p>LangChain is one of (if not the) most popular open-source AI frameworks. It works well for many things and less so for many others. Here, we will discuss the when and why of using LangChain compared to other frameworks.<br /><br />📌 Full Course:<br />https://www.aurelio.ai/course/langchain<br /><br />👋🏼 Aurelio AI:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #coding #aiagents #langchain <br /><br />00:00 When to use LangChain<br />00:52 Do I need a framework?<br />04:03 LangChain for Learning<br />06:03 Moving on from LangChain<br />07:46 Should you use Langchain?<br />...<br />https://www.youtube.com/watch?v=d2-R_o5I5FQ]]></description><link>https://odysee.com/when-should-you-use-langchain:3bffd52a1ebd0f3708fb3848748c24257b3480ad</link><guid isPermaLink="true">https://odysee.com/when-should-you-use-langchain:3bffd52a1ebd0f3708fb3848748c24257b3480ad</guid><pubDate>Thu, 29 May 2025 12:30:57 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/when-should-you-use-langchain/3bffd52a1ebd0f3708fb3848748c24257b3480ad/eb3f48.mp4" length="219211044" type="video/mp4"/><itunes:title>When Should You Use LangChain?</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/d2-R_o5I5FQ"/><itunes:duration>559</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LoRA Fine-tuning Tiny LLMs as Expert Agents]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/ZQfibo521Lc" width="480" alt="thumbnail" title="LoRA Fine-tuning Tiny LLMs as Expert Agents" /></p>Tiny LLMs have never been ideal for agentic workflows. They lack the ability to reliably generate function calls; however, this isn't due to any real limitation on LLM size. 
Instead, it's due to the LLM providers' lack of focus on data that provides quality examples of function calling.<br /><br />Because of that, we can fine-tune expert agents from tiny LLMs such as the 1B parameter Llama 3.2 and get incredible results. In this video, we do just that - we take llama-3.2-1b-instruct, Salesforce's xLAM dataset, and Low-Rank Adaptation (LoRA) fine-tuning via NVIDIA's NeMo Microservices, to create our own tiny LLM agent.<br /><br />Thanks to NVIDIA for sponsoring the video!<br /><br />📌 NeMo Microservices: https://nvda.ws/4mqG2wH<br />🚢 Deploying NeMo Code: https://github.com/aurelio-labs/cookbook/blob/main/gen-ai/training/lora/nvidia-nemo/deploying-nemo.ipynb<br />👾 LoRA Fine-tuning Code: https://github.com/aurelio-labs/cookbook/blob/main/gen-ai/training/lora/nvidia-nemo/nemo-lora-function-calling.ipynb<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />X: https://x.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #aiagents #artificialintellegence #programming #coding <br /><br />00:00 LoRA Fine-tuning Agents<br />01:34 NeMo Microservices<br />03:19 NeMo Deployment<br />07:49 Deploying NeMo Microservices<br />16:54 xLAM Dataset Preparation<br />26:49 Train Validation Test Split<br />28:59 NeMo Data Store and Entity Store<br />34:14 LoRA Training with NeMo Customizer<br />42:03 Deploying NIMs<br />47:10 Chat Completion with NVIDIA NIMs<br />49:47 NVIDIA NeMo Microservices<br />...<br />https://www.youtube.com/watch?v=ZQfibo521Lc]]></description><link>https://odysee.com/lora-fine-tuning-tiny-llms-as-expert:f35bcca2d7234b468135a85a5502762a02fee208</link><guid isPermaLink="true">https://odysee.com/lora-fine-tuning-tiny-llms-as-expert:f35bcca2d7234b468135a85a5502762a02fee208</guid><pubDate>Tue, 27 May 2025 16:46:53 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/lora-fine-tuning-tiny-llms-as-expert/f35bcca2d7234b468135a85a5502762a02fee208/ffc192.mp4" 
length="493982880" type="video/mp4"/><itunes:title>LoRA Fine-tuning Tiny LLMs as Expert Agents</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/ZQfibo521Lc"/><itunes:duration>3163</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Stateful and Fault-Tolerant AI Agents]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/14vQqJ9WG6U" width="480" alt="thumbnail" title="Stateful and Fault-Tolerant AI Agents" /></p>Temporal is a distributed, open-source workflow orchestration platform built by ex-Uber engineers to enable massive-scale workflow orchestration. In this talk, Bogdan from Aurelio AI explains how Temporal can be used for AI agents to build fault-tolerant, stateful, and durable AI agent workflows. We cover a hands-on AI agents tutorial using the Temporal workflow orchestration engine.<br /><br />👋🏼 Aurelio AI: https://aurelio.ai/learn<br />🦜🔗 AI Engineer's Guide to LangChain: https://aurelio.ai/course/langchain<br />🔬 AI Platform: https://platform.aurelio.ai<br /><br />#ai #aiagents #artificialintellegence #programming #coding <br /><br />00:00 What is Temporal?<br />04:25 Temporal 101<br />24:36 Temporal AI Agent Demo<br />35:38 Temporal Agent Code<br />56:48 Questions<br />...<br />https://www.youtube.com/watch?v=14vQqJ9WG6U]]></description><link>https://odysee.com/stateful-and-fault-tolerant-ai-agents:2495118fcf8893bf5a19d71a5ce48eb0425b12cf</link><guid isPermaLink="true">https://odysee.com/stateful-and-fault-tolerant-ai-agents:2495118fcf8893bf5a19d71a5ce48eb0425b12cf</guid><pubDate>Fri, 23 May 2025 13:00:03 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/stateful-and-fault-tolerant-ai-agents/2495118fcf8893bf5a19d71a5ce48eb0425b12cf/7c0d0d.mp4" length="133498776" type="video/mp4"/><itunes:title>Stateful and Fault-Tolerant AI Agents</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image 
href="https://thumbnails.lbry.com/14vQqJ9WG6U"/><itunes:duration>3656</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[OpenAI Agents SDK Handoffs | Deep Dive Tutorial]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/LuarehusOWU" width="480" alt="thumbnail" title="OpenAI Agents SDK Handoffs | Deep Dive Tutorial" /></p>In OpenAI's Agents SDK, we can build multi-agent workflows in two ways. The first is agents-as-tools, which follow an orchestrator-subagent pattern. The second is using handoffs, which allow agents to pass control over to other agents. In this video, we'll dive into handoffs using Agents SDK and the latest LLMs, including gpt-4.1-mini and gpt-4.1 — using a multi-agent workflow containing a web search agent, dummy RAG agent, and code execution agent.<br /><br />📖 Article: https://www.aurelio.ai/learn/agents-sdk-multi-agent<br />📌 Code: https://github.com/aurelio-labs/agents-sdk-course/blob/main/chapters/04-multi-agent.ipynb<br />🔗 LinkUp: https://app.linkup.so/?utm_source=james (Affiliate link)<br />👋🏼 AI Platform: https://platform.aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #coding #aiagents #programming #artificialintellegence #python <br /><br />00:00 Agents SDK Handoff<br />05:06 Code Start<br />07:12 Web Search Agent<br />10:14 RAG Agent<br />11:18 Code Execution Agent<br />11:46 Defining the Orchestrator<br />12:36 Agents SDK Handoffs<br />20:27 Using OpenAI Traces Dashboard<br />23:26 More Handoff Testing<br />26:20 Other Handoff Features<br />28:45 Agents SDK on_handoff<br />29:51 Agents SDK Handoff input_type<br />31:28 Agents SDK Handoff input_filter<br />...<br />https://www.youtube.com/watch?v=LuarehusOWU]]></description><link>https://odysee.com/openai-agents-sdk-handoffs-deep-dive:9b02feb4fa42559ed194da15a579e9f7a208e5a1</link><guid 
isPermaLink="true">https://odysee.com/openai-agents-sdk-handoffs-deep-dive:9b02feb4fa42559ed194da15a579e9f7a208e5a1</guid><pubDate>Sat, 17 May 2025 13:00:38 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/openai-agents-sdk-handoffs-deep-dive/9b02feb4fa42559ed194da15a579e9f7a208e5a1/02f122.mp4" length="499868697" type="video/mp4"/><itunes:title>OpenAI Agents SDK Handoffs | Deep Dive Tutorial</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/LuarehusOWU"/><itunes:duration>2089</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Multi-Agent Systems in OpenAI's Agents SDK | Full Tutorial]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/2MYzc79Lj04" width="480" alt="thumbnail" title="Multi-Agent Systems in OpenAI's Agents SDK | Full Tutorial" /></p>OpenAI's Agents SDK provides various ways for building multi-agent systems. Here we focus on the agents-as-tools method to build an orchestrator-subagent system. 
Throughout the video we'll be using OpenAI's new gpt-4.1 and gpt-4.1-mini models while building a full multi-agent workflow capable of searching the web, using internal documents, and executing code.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/agents-sdk-course/blob/main/chapters/04-multi-agent.ipynb<br />📖 Article:<br />https://www.aurelio.ai/learn/agents-sdk-multi-agent<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />X: https://x.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintellegence #ai #aiagents #programming #python <br /><br />00:00 OpenAI's Agents SDK<br />01:38 Python Setup<br />02:51 Orchestrator Subagent<br />05:57 Web Search Subagent<br />11:40 RAG Subagent<br />17:15 Code Execution Subagent<br />23:44 Orchestrator Agent<br />28:44 Evaluating our Multi-Agent Workflow<br />39:31 Pros and Cons of Orchestrators<br />...<br />https://www.youtube.com/watch?v=2MYzc79Lj04]]></description><link>https://odysee.com/multi-agent-systems-in-openai%27s-agents:bc0c4c5f1deb3d20d20d88a51d22ed1468be9e72</link><guid isPermaLink="true">https://odysee.com/multi-agent-systems-in-openai%27s-agents:bc0c4c5f1deb3d20d20d88a51d22ed1468be9e72</guid><pubDate>Thu, 08 May 2025 13:00:07 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/multi-agent-systems-in-openai&apos;s-agents/bc0c4c5f1deb3d20d20d88a51d22ed1468be9e72/cade8f.mp4" length="456631494" type="video/mp4"/><itunes:title>Multi-Agent Systems in OpenAI&apos;s Agents SDK | Full Tutorial</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/2MYzc79Lj04"/><itunes:duration>2665</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[AI Voice Assistants with OpenAI's Agents SDK | Full Tutorial + Code]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/xMkbcSiS47o" width="480" alt="thumbnail" title="AI Voice Assistants with 
OpenAI's Agents SDK | Full Tutorial + Code" /></p>Voice-based AI agents represent a huge opportunity for engineers and are likely to dominate the various ways that we interface with AI in the near future. In this video, we look at building a voice AI agent using OpenAI's Agents SDK and their new gpt-4.1 models (specifically gpt-4.1-nano).<br /><br />📌 Code:<br />https://github.com/aurelio-labs/agents-sdk-course/blob/main/chapters/07-voice.ipynb<br /><br />📖 Article:<br />https://www.aurelio.ai/learn/agents-sdk-voice<br /><br />👋🏼 New AI Platform:<br />https://platform.aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #aiagents #coding #programming #artificialintellegence #python <br /><br />00:00 AI Voice Assistants<br />00:58 Getting the Code<br />02:19 Handling Audio in Python<br />06:56 Agents SDK Voice Pipeline<br />11:02 Speaking to the Agent<br />13:38 Chat with Voice Agent<br />15:31 Voice Agents Conclusion<br />...<br />https://www.youtube.com/watch?v=xMkbcSiS47o]]></description><link>https://odysee.com/ai-voice-assistants-with-openai%27s-agents:bc3cd9a9363e3a15ec57073c0b0397d0e7cc4287</link><guid isPermaLink="true">https://odysee.com/ai-voice-assistants-with-openai%27s-agents:bc3cd9a9363e3a15ec57073c0b0397d0e7cc4287</guid><pubDate>Thu, 01 May 2025 13:01:42 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/ai-voice-assistants-with-openai&apos;s-agents/bc3cd9a9363e3a15ec57073c0b0397d0e7cc4287/b702f2.mp4" length="161198907" type="video/mp4"/><itunes:title>AI Voice Assistants with OpenAI&apos;s Agents SDK | Full Tutorial + Code</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/xMkbcSiS47o"/><itunes:duration>1105</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Cogito v1 Outperforms Llama 4 | Full Tutorial with 
LM Studio and LiteLLM]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/bEPYba4bxs8" width="480" alt="thumbnail" title="Cogito v1 Outperforms Llama 4 | Full Tutorial with LM Studio and LiteLLM" /></p>Cogito v1 is the latest in state-of-the-art (SotA) open source / open weight LLMs. It manages to outperform even Llama 4 in various benchmarks and weight classes. In this video, we see how to deploy and use Cogito v1 locally with LiteLLM and LM Studio. We also build this out to enable full agent logic with tool use / function calling.<br /><br />📕 Article: https://www.aurelio.ai/learn/cogito-v1<br />📌 Code: https://github.com/aurelio-labs/cookbook/blob/main/gen-ai/local/lmstudio/cogito-v1.ipynb<br />👾 New AI Platform: https://platform.aurelio.ai/<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#aiagents #llm #coding #ai #artificialintellegence #python #programming <br /><br />00:00 Cogito v1<br />00:26 Local LLMs with LM Studio<br />02:38 Python Setup with uv<br />03:49 Loading our LLM in LM Studio<br />04:31 Using LM Studio with Python<br />05:17 Using LiteLLM with LM Studio<br />09:14 Tools and Agents<br />12:29 Create a Web Search Tool<br />16:19 Tool Calling with LiteLLM<br />20:54 Building a Local Agent<br />...<br />https://www.youtube.com/watch?v=bEPYba4bxs8]]></description><link>https://odysee.com/cogito-v1-outperforms-llama-4-full:aaf259b4f563bc6a8573868888ccdd1f2bfccc9b</link><guid isPermaLink="true">https://odysee.com/cogito-v1-outperforms-llama-4-full:aaf259b4f563bc6a8573868888ccdd1f2bfccc9b</guid><pubDate>Tue, 22 Apr 2025 13:01:09 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/cogito-v1-outperforms-llama-4-full/aaf259b4f563bc6a8573868888ccdd1f2bfccc9b/b03445.mp4" length="200719620" type="video/mp4"/><itunes:title>Cogito v1 Outperforms Llama 4 | Full Tutorial with LM Studio and LiteLLM</itunes:title><itunes:author>James 
Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/bEPYba4bxs8"/><itunes:duration>1484</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Advanced Guardrails for AI Agents]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/rMUycP_cp9g" width="480" alt="thumbnail" title="Advanced Guardrails for AI Agents" /></p>In this video, we'll learn how to build advanced AI guardrails as part of a broader protective system for production-ready AI systems. We explore the use of hybrid vector spaces to develop these guardrails, which can be incredibly useful for chatbot use-cases with specific brand, topic, or behavioral guardrails.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/semantic-router/blob/main/docs/examples/hybrid-chat-guardrails.ipynb<br />⚠️ API keys:<br />- OpenAI https://platform.openai.com/api-keys<br />- Aurelio AI https://platform.aurelio.ai/settings/api-keys<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintellegence #aiagents #coding #ai #programming <br /><br />00:00 Why Guardrails<br />00:20 Guardrails for Agents<br />03:40 Sparse and Dense Vectors<br />08:00 Hybrid Guardrails with Python<br />12:29 Initializing the HybridRouter<br />15:01 Optimizing our Hybrid Guardrails<br />19:16 Testing our Hybrid Router<br />20:46 AI Guardrails in Context<br />...<br />https://www.youtube.com/watch?v=rMUycP_cp9g]]></description><link>https://odysee.com/advanced-guardrails-for-ai-agents:276ce80bfa75253e9ea106708d0e3872dec3202f</link><guid isPermaLink="true">https://odysee.com/advanced-guardrails-for-ai-agents:276ce80bfa75253e9ea106708d0e3872dec3202f</guid><pubDate>Tue, 01 Apr 2025 17:28:21 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/advanced-guardrails-for-ai-agents/276ce80bfa75253e9ea106708d0e3872dec3202f/0ae75b.mp4" 
length="304300757" type="video/mp4"/><itunes:title>Advanced Guardrails for AI Agents</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/rMUycP_cp9g"/><itunes:duration>1330</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Mistral AI Agent with Streaming + Tools]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/oaIBkEdITRQ" width="480" alt="thumbnail" title="Mistral AI Agent with Streaming + Tools" /></p>Mistral AI is one of the big AI labs providing various models including SotA LLM and embedding models, making them an ideal sole provider for retrieval tasks (such as RAG) that require both generation via LLMs and retrieval via embedding models.<br /><br />The Aurelio Platform provides several utility services to help AI engineers build RAG and GenAI applications faster.<br /><br />In this example we're going to use both of these together to create a "chat-with-video" AI pipeline. We'll see how to:<br /><br />1. Take any YouTube video and transcribe it to text using Aurelio's video-to-text endpoint.<br />2. Use Mistral LLMs to chat with our transcribed video content.<br />3. Add chat history to make our AI conversational.<br />4. Integrate async and streaming for a better UX and improved scalability.<br />5. 
See how we can optimize response latency and costs by reducing overall token count using semantic similarity, using Aurelio's chunking endpoint and Mistral's embedding models.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/cookbook/blob/main/gen-ai/agents/video-agent.ipynb<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#aiagents #mistralai #ai #coding <br /><br />00:00 Mistral AI Agent<br />00:47 Python Setup<br />01:44 Video Transcription<br />03:25 Agent Overview<br />07:03 Using Mistral<br />09:26 Adding Agent Chat History<br />11:23 Async and Streaming<br />17:53 Agent Token Usage<br />19:17 Building a Retrieval Agent<br />25:52 Creating a Tool for Mistral<br />33:40 Tool Execution Logic<br />...<br />https://www.youtube.com/watch?v=oaIBkEdITRQ]]></description><link>https://odysee.com/mistral-ai-agent-with-streaming-%2B-tools:3278d212b0f48c5d963fc53dc2c60b739c17f078</link><guid isPermaLink="true">https://odysee.com/mistral-ai-agent-with-streaming-%2B-tools:3278d212b0f48c5d963fc53dc2c60b739c17f078</guid><pubDate>Thu, 13 Mar 2025 16:51:40 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/mistral-ai-agent-with-streaming-+-tools/3278d212b0f48c5d963fc53dc2c60b739c17f078/a61a7e.mp4" length="544437704" type="video/mp4"/><itunes:title>Mistral AI Agent with Streaming + Tools</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/oaIBkEdITRQ"/><itunes:duration>2584</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Agents SDK from OpenAI! | Full Tutorial]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/35nxORG1mtg" width="480" alt="thumbnail" title="Agents SDK from OpenAI! 
| Full Tutorial" /></p>OpenAI have released an Agents SDK, their version of an open source agent development library akin to LangChain, Llama-Index, Pydantic AI, and others.<br /><br />OpenAI have outlined a few features of the library in their announcement blog post:<br /><br />- Agent Loop: Automated loop for tool calls and LLM interactions until completion.<br />- Python-First: Leverage Python features for agent orchestration without new abstractions.<br />- Handoffs: Seamless coordination and delegation between multiple agents.<br />- Guardrails: Parallel input validations to halt processes on failure.<br />- Function Tools: Convert Python functions into tools with auto-schema and validation.<br />- Tracing: Visualize, debug, and monitor workflows with OpenAI tools integration.<br /><br />We'll focus on covering the essentials here - including the agent loop, python-first, guardrails, and function tools features.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/cookbook/blob/main/gen-ai/openai/agents-sdk-intro.ipynb<br />📚 5 Hour LangChain Course:<br />https://www.aurelio.ai/course/langchain<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#aiagents #openai #ai #coding #artificialintellegence #programming <br /><br />00:00 OpenAI Agents SDK<br />01:05 Agents SDK Code<br />02:38 Agent and Runner<br />06:56 Function Tools<br />12:13 Agents SDK Guardrails<br />18:29 Conversational Agents<br />21:02 Thoughts on Agents SDK<br />...<br />https://www.youtube.com/watch?v=35nxORG1mtg]]></description><link>https://odysee.com/agents-sdk-from-openai!-full-tutorial:35bc62753092cdde7746c5117638dcdb2694175f</link><guid isPermaLink="true">https://odysee.com/agents-sdk-from-openai!-full-tutorial:35bc62753092cdde7746c5117638dcdb2694175f</guid><pubDate>Wed, 12 Mar 2025 17:27:13 GMT</pubDate><enclosure 
url="https://player.odycdn.com/api/v3/streams/free/agents-sdk-from-openai!-full-tutorial/35bc62753092cdde7746c5117638dcdb2694175f/2b1bf9.mp4" length="256147752" type="video/mp4"/><itunes:title>Agents SDK from OpenAI! | Full Tutorial</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/35nxORG1mtg"/><itunes:duration>1343</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[NEW Pinecone Assistant Features + GA Release!]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/kGrSJKBfQZM" width="480" alt="thumbnail" title="NEW Pinecone Assistant Features + GA Release!" /></p>First look at the new Pinecone Assistant, including the Chat API, Context API, and OpenAI-compatible Chat Completions API.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/learn/assistant/yorkshire-assistant.ipynb<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #aichatbot #artificialintellegence #aiagents <br /><br />00:00 New AI Assistant<br />00:40 Pinecone Assistant Release<br />02:38 Pinecone Assistant Python Client<br />03:39 Assistant Custom Instructions<br />07:05 Pinecone Assistant APIs<br />09:42 Assistant Chat API<br />19:37 Context API<br />21:37 Pinecone Chat Completions<br />24:42 Deleting our Assistant and Concluding<br />...<br />https://www.youtube.com/watch?v=kGrSJKBfQZM]]></description><link>https://odysee.com/new-pinecone-assistant-features-%2B-ga:e6d7d1d7d64ebdaf82cd0b5a5b4a47f40d41e939</link><guid isPermaLink="true">https://odysee.com/new-pinecone-assistant-features-%2B-ga:e6d7d1d7d64ebdaf82cd0b5a5b4a47f40d41e939</guid><pubDate>Thu, 23 Jan 2025 00:00:00 GMT</pubDate><enclosure 
url="https://player.odycdn.com/api/v3/streams/free/new-pinecone-assistant-features-+-ga/e6d7d1d7d64ebdaf82cd0b5a5b4a47f40d41e939/3d11d3.mp4" length="246024659" type="video/mp4"/><itunes:title>NEW Pinecone Assistant Features + GA Release!</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/kGrSJKBfQZM"/><itunes:duration>1545</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[What's next for Semantic Router (v1 update)]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/vjMN8FSGiQI" width="480" alt="thumbnail" title="What's next for Semantic Router (v1 update)" /></p>Talking through an update on the progress towards Semantic Router's first major release, i.e. v0.1.0. It comes with full HybridRouter support, features to make the library ready for production environments, and much more.<br /><br />📌 Article:<br />https://www.aurelio.ai/learn/semantic-router-update-jan<br /><br />⭐️ Repo:<br />https://github.com/aurelio-labs/semantic-router<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />00:00 Semantic Router<br />01:13 Keeping Semantic Router Lightweight<br />03:32 Current State of SR v1<br />03:58 Modular Routers Encoders and Indexes<br />06:27 Semantic Router Synchronization<br />10:30 Full Async Support<br />12:07 HybridRouter Upgrades<br />12:46 New Semantic Router Integrations<br />13:55 Testing and Doc Upgrades<br />16:04 Getting Started with v1<br /><br />#artificialintelligence #ai #aiagents #python<br />...<br />https://www.youtube.com/watch?v=vjMN8FSGiQI]]></description><link>https://odysee.com/what%27s-next-for-semantic-router-%28v1:b03901d6b4500f51b9285340d8409586ec385bde</link><guid 
isPermaLink="true">https://odysee.com/what%27s-next-for-semantic-router-%28v1:b03901d6b4500f51b9285340d8409586ec385bde</guid><pubDate>Tue, 14 Jan 2025 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/what&apos;s-next-for-semantic-router-(v1/b03901d6b4500f51b9285340d8409586ec385bde/28dc76.mp4" length="188041928" type="video/mp4"/><itunes:title>What&apos;s next for Semantic Router (v1 update)</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/vjMN8FSGiQI"/><itunes:duration>1064</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Gemini 2 Agent + Google Search and Citations]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/G8kwhCf_xCI" width="480" alt="thumbnail" title="Gemini 2 Agent + Google Search and Citations" /></p>In this video we build a web search agent with Gemini 2. We're using gemini-2.0-flash-exp with Google's built-in Google search tool. 
We'll take a look at grounding our model response with inline citations.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/cookbook/blob/main/gen-ai/google-ai/gemini-2/web-search.ipynb<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintelligence #ai #aiagents <br /><br />00:00 Gemini with Google Search<br />01:36 Gemini Web Search in Python<br />04:20 Using Gemini 2 Flash<br />06:07 Google GenAI Libraries<br />06:59 Using Google Search with Gemini<br />09:51 Grounding Gemini Responses<br />13:46 Inserting Citations for Gemini<br />20:30 Why use Citations<br />...<br />https://www.youtube.com/watch?v=G8kwhCf_xCI]]></description><link>https://odysee.com/gemini-2-agent-%2B-google-search-and:bfb3a626269f4d4afd61105c7d1c920ce3a3ad1e</link><guid isPermaLink="true">https://odysee.com/gemini-2-agent-%2B-google-search-and:bfb3a626269f4d4afd61105c7d1c920ce3a3ad1e</guid><pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/gemini-2-agent-+-google-search-and/bfb3a626269f4d4afd61105c7d1c920ce3a3ad1e/3e378d.mp4" length="250982794" type="video/mp4"/><itunes:title>Gemini 2 Agent + Google Search and Citations</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/G8kwhCf_xCI"/><itunes:duration>1386</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Gemini 2 Multimodal and Spatial Awareness in Python]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/zBiKWxonD-o" width="480" alt="thumbnail" title="Gemini 2 Multimodal and Spatial Awareness in Python" /></p>We test Google Deepmind's new Gemini 2 (gemini-flash-2.0-exp) multimodal capabilities and spatial awareness. 
Gemini has impressive structured output reliability and, as we'll see with a few bounding box examples, very good spatial awareness — but it isn't perfect.<br /><br />We'll see in the near future (and with a few more videos) how Gemini compares to OpenAI's models (such as gpt-4, gpt-4o, and o1) and whether we finally have a worthy competitor to OpenAI's dominance in production-level AI applications.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/cookbook/blob/main/gen-ai/google-ai/gemini-2/multimodal.ipynb<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #artificialintelligence #aichatbot #python <br /><br />00:00 Gemini 2 Multimodal<br />00:41 Gemini Focus on Agents<br />01:53 Running the Code<br />03:08 Asking Gemini to Describe Images<br />09:29 Gemini Image Bounding Boxes<br />21:06 Gemini Spatial Awareness Example 2<br />23:29 Gemini Spatial Awareness Example 3<br />26:52 Gemini Spatial Awareness Example 4<br />29:09 Gemini Image-to-Text<br />30:50 Google Gemini vs OpenAI GPTs<br />...<br />https://www.youtube.com/watch?v=zBiKWxonD-o]]></description><link>https://odysee.com/gemini-2-multimodal-and-spatial:2e52854109f101e251dda71c8eac883baae77375</link><guid isPermaLink="true">https://odysee.com/gemini-2-multimodal-and-spatial:2e52854109f101e251dda71c8eac883baae77375</guid><pubDate>Tue, 17 Dec 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/gemini-2-multimodal-and-spatial/2e52854109f101e251dda71c8eac883baae77375/6adc09.mp4" length="311131532" type="video/mp4"/><itunes:title>Gemini 2 Multimodal and Spatial Awareness in Python</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image
href="https://thumbnails.lbry.com/zBiKWxonD-o"/><itunes:duration>1962</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Better Chatbots with Semantic Routes]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/uCVRfRVNRHQ" width="480" alt="thumbnail" title="Better Chatbots with Semantic Routes" /></p>We explore how to use semantic router for various conversational AI use-cases. Focusing on the conceptual logic behind using semantic router for better control over LLM / agent behaviours, tool use, etc.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/semantic-router/blob/main/docs/00-introduction.ipynb<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #artificialintelligence #python #aiagents #chatbot <br /><br />00:00 Semantic Router<br />00:37 Concept of Semantic Routers<br />07:42 Routes and Utterances<br />15:11 Encoders<br />16:26 New Routers<br />20:48 Semantic Routes for Chat<br />21:29 LLM Output Guardrails<br />28:25 Fine-grained control of LLMs<br />29:10 Routes for Tool Use<br />32:01 LLM Routing<br />34:34 Outro<br />...<br />https://www.youtube.com/watch?v=uCVRfRVNRHQ]]></description><link>https://odysee.com/better-chatbots-with-semantic-routes:b83db5f9cad10e3ebdb6beef1aed654752f0a0bc</link><guid isPermaLink="true">https://odysee.com/better-chatbots-with-semantic-routes:b83db5f9cad10e3ebdb6beef1aed654752f0a0bc</guid><pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/better-chatbots-with-semantic-routes/b83db5f9cad10e3ebdb6beef1aed654752f0a0bc/0776db.mp4" length="177119757" type="video/mp4"/><itunes:title>Better Chatbots with Semantic Routes</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image 
href="https://thumbnails.lbry.com/uCVRfRVNRHQ"/><itunes:duration>2127</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[AI Agents as Neuro-Symbolic Systems?]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/JaHfCrVTYF4" width="480" alt="thumbnail" title="AI Agents as Neuro-Symbolic Systems?" /></p>Thinking through AI agents and the neuro-symbolic definition from an early LLM agent paper called MRKL. I'm sharing my reasoning behind using the "neuro-symbolic system" definition for AI agents.<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintelligence #ai #python #machinelearning <br /><br />00:00 AI Agents<br />02:04 ReAct Agents<br />07:28 Redefining Agents<br />12:48 Origins of Connectionism<br />17:23 Neuro-symbolic AI<br />21:09 Agents without LLMs<br />25:21 Broader Definition of Agents<br />...<br />https://www.youtube.com/watch?v=JaHfCrVTYF4]]></description><link>https://odysee.com/ai-agents-as-neuro-symbolic-systems:ca4878a268f160bffdc95c725f431e39105606e9</link><guid isPermaLink="true">https://odysee.com/ai-agents-as-neuro-symbolic-systems:ca4878a268f160bffdc95c725f431e39105606e9</guid><pubDate>Tue, 19 Nov 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/ai-agents-as-neuro-symbolic-systems/ca4878a268f160bffdc95c725f431e39105606e9/fb19ad.mp4" length="207074256" type="video/mp4"/><itunes:title>AI Agents as Neuro-Symbolic Systems?</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/JaHfCrVTYF4"/><itunes:duration>1739</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Llama Index Workflows | Building Async AI Agents]]></title><description><![CDATA[<p><img 
src="https://thumbnails.lbry.com/KMZBLBAfE1s" width="480" alt="thumbnail" title="Llama Index Workflows | Building Async AI Agents" /></p>Llama Index Workflows is an event-driven framework for building AI agents. It aims to provide AI Engineers with a structured conceptual frame around which we can build AI software, similar in some respects to LangChain's LangGraph. In this video, we'll compare the two frameworks (Llama Index Workflows and Langchain's LangGraph), learn how to use Workflows, and build an async research agent with the library.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/learn/generation/llama-index/llama-index-research-agent.ipynb<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintelligence #llamaindex #coding #programming <br /><br />00:00 Llama Index Workflows<br />00:53 Llama Index vs. 
LangGraph<br />05:27 Python Prerequisites<br />06:40 Building Knowledge Base<br />08:20 Defining Agent Tools<br />11:02 Defining the LLM<br />12:31 Llama Index Workflow Events<br />14:00 Llama Index Agent Workflow<br />24:25 Debugging our Workflow<br />26:47 Using and Tweaking our Agent<br />30:05 Testing Llama Index Async<br />...<br />https://www.youtube.com/watch?v=KMZBLBAfE1s]]></description><link>https://odysee.com/llama-index-workflows-building-async-ai:da8675fddfe7846128a4eecc0ca842e68e6bc921</link><guid isPermaLink="true">https://odysee.com/llama-index-workflows-building-async-ai:da8675fddfe7846128a4eecc0ca842e68e6bc921</guid><pubDate>Tue, 24 Sep 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/llama-index-workflows-building-async-ai/da8675fddfe7846128a4eecc0ca842e68e6bc921/dc3936.mp4" length="306612939" type="video/mp4"/><itunes:title>Llama Index Workflows | Building Async AI Agents</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/KMZBLBAfE1s"/><itunes:duration>2088</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Local LangGraph Agents with Llama 3.1 + Ollama]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/5a-NuqTaC20" width="480" alt="thumbnail" title="Local LangGraph Agents with Llama 3.1 + Ollama" /></p>LangGraph is one of the most versatile Python libraries for building AI agents. We can combine LangChain's LangGraph with Ollama and Llama 3.1 to build highly custom and fully local LLM agents. 
In this video, we will do exactly that by building a pizza recommendation agent using the Reddit API.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/tree/master/learn/generation/langchain/langgraph/02-ollama-langgraph-agent<br /><br />💻 LangGraph Intro Video:<br />https://youtu.be/usOmwLZNVuM<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#langchain #ai #artificialintelligence #ollama #meta <br /><br />00:00 Local Agents with LangGraph and Ollama<br />01:00 Setting up Ollama and Python<br />05:35 Reddit API Tool<br />12:40 Overview of the Graph<br />17:11 Final Answer Tool<br />18:33 Agent State<br />19:09 Ollama Llama 3.1 Setup<br />26:21 Organizing Agent Tool Use<br />35:21 Creating Agent Nodes<br />39:14 Building the Agent Graph<br />43:10 Testing the Llama 3.1 Agent<br />46:07 Final Notes on Local Agents<br />...<br />https://www.youtube.com/watch?v=5a-NuqTaC20]]></description><link>https://odysee.com/local-langgraph-agents-with-llama-3.1-%2B:4e1cb7a055bf06c47a4c5b8162c504078013217a</link><guid isPermaLink="true">https://odysee.com/local-langgraph-agents-with-llama-3.1-%2B:4e1cb7a055bf06c47a4c5b8162c504078013217a</guid><pubDate>Thu, 29 Aug 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/local-langgraph-agents-with-llama-3.1-+/4e1cb7a055bf06c47a4c5b8162c504078013217a/a954d8.mp4" length="360041426" type="video/mp4"/><itunes:title>Local LangGraph Agents with Llama 3.1 + Ollama</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/5a-NuqTaC20"/><itunes:duration>2874</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LangGraph Deep Dive: Build 
Better Agents]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/usOmwLZNVuM" width="480" alt="thumbnail" title="LangGraph Deep Dive: Build Better Agents" /></p>LangGraph is an agent framework from LangChain that allows us to develop agents via graphs. By building agents using graphs, we have much more control and flexibility over our AI agent's execution path.<br /><br />In this video, we will build an AI research agent using LangGraph. Research agents are multi-step LLM agents that can produce in-depth research reports on a topic of our choosing.<br /><br />We will see how we can build our own AI research agent using gpt-4o, Pinecone, LangGraph, arXiv, and Google via the SerpAPI.<br /><br />📌 Code:<br />https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/langgraph/01-gpt-4o-research-agent.ipynb<br /><br />📖 Article:<br />https://www.pinecone.io/learn/langgraph-research-agent/<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintelligence #langchain #llm #python #rag <br /><br />00:00 LangGraph Agents<br />02:04 LangGraph Agent Overview<br />04:46 Short History of Agents and ReAct<br />07:58 Agents as Graphs<br />10:18 LangGraph<br />12:41 Research Agent Components<br />14:30 Building the RAG Pipeline<br />17:28 LangGraph Graph State<br />18:56 Custom Agent Tools<br />19:10 ArXiv Paper Fetch Tool<br />21:22 Web Search Tool<br />22:42 RAG Tools<br />23:57 Final Answer Tool<br />25:10 Agent Decision Making<br />30:16 LangGraph Router and Nodes<br />33:00 Building the LangGraph Graph<br />36:52 Building the Research Agent Report<br />39:39 Testing the Research Agent<br />43:42 Final Notes on
Agentic Graphs<br />...<br />https://www.youtube.com/watch?v=usOmwLZNVuM]]></description><link>https://odysee.com/langgraph-deep-dive-build-better-agents:993d71ad1a3bd5d22c3dfd1d2533fb7b183e01a2</link><guid isPermaLink="true">https://odysee.com/langgraph-deep-dive-build-better-agents:993d71ad1a3bd5d22c3dfd1d2533fb7b183e01a2</guid><pubDate>Wed, 07 Aug 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/langgraph-deep-dive-build-better-agents/993d71ad1a3bd5d22c3dfd1d2533fb7b183e01a2/53733d.mp4" length="449493712" type="video/mp4"/><itunes:title>LangGraph Deep Dive: Build Better Agents</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/usOmwLZNVuM"/><itunes:duration>2772</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[RAG with Mistral AI!]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/I0c405L7-9A" width="480" alt="thumbnail" title="RAG with Mistral AI!" 
/></p>We build a RAG pipeline using Mistral AI's mistral-embed and mistral-large, with the Pinecone vector DB as our knowledge base.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/integrations/mistralai/mistral-rag.ipynb<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintelligence #mistralai #rag #llm<br /><br />00:00 RAG with Mistral and Pinecone<br />01:07 Mistral API in Python<br />01:44 Setting up Vector DB<br />03:14 Mistral Embeddings<br />04:12 Creating Pinecone Index<br />08:24 RAG with Mistral<br />11:20 Final Thoughts on Mistral<br />...<br />https://www.youtube.com/watch?v=I0c405L7-9A]]></description><link>https://odysee.com/rag-with-mistral-ai!:0c71e6af65804c3444e24b6838879631347b2846</link><guid isPermaLink="true">https://odysee.com/rag-with-mistral-ai!:0c71e6af65804c3444e24b6838879631347b2846</guid><pubDate>Thu, 11 Jul 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/rag-with-mistral-ai!/0c71e6af65804c3444e24b6838879631347b2846/b61570.mp4" length="97586731" type="video/mp4"/><itunes:title>RAG with Mistral AI!</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/I0c405L7-9A"/><itunes:duration>744</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Superfast RAG with Llama 3 and Groq]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/ne-lrm0n0bg" width="480" alt="thumbnail" title="Superfast RAG with Llama 3 and Groq" /></p>The Groq API provides access to Language Processing Units (LPUs) that enable incredibly fast LLM inference. The service offers several LLMs, including Meta's Llama 3.
In this video, we'll implement a RAG pipeline using Llama 3 70B via Groq, an open source e5 encoder, and the Pinecone vector database.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/integrations/groq/groq-llama-3-rag.ipynb<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#artificialintelligence #llama3 #groq <br /><br />00:00 Groq and Llama 3 for RAG<br />00:37 Llama 3 in Python<br />04:25 Initializing e5 for Embeddings<br />05:56 Using Pinecone for RAG<br />07:24 Why We Concatenate Title and Content<br />10:15 Testing RAG Retrieval Performance<br />11:28 Initialize connection to Groq API<br />12:24 Generating RAG Answers with Llama 3 70B<br />14:37 Final Points on Why Groq Matters<br />...<br />https://www.youtube.com/watch?v=ne-lrm0n0bg]]></description><link>https://odysee.com/superfast-rag-with-llama-3-and-groq:52d97d47e98bad6bdf2b4d58f0c77b2097a979d4</link><guid isPermaLink="true">https://odysee.com/superfast-rag-with-llama-3-and-groq:52d97d47e98bad6bdf2b4d58f0c77b2097a979d4</guid><pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/superfast-rag-with-llama-3-and-groq/52d97d47e98bad6bdf2b4d58f0c77b2097a979d4/72b852.mp4" length="131601234" type="video/mp4"/><itunes:title>Superfast RAG with Llama 3 and Groq</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/ne-lrm0n0bg"/><itunes:duration>1008</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[NEW Pinecone Assistant]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/MPCiNA8BqO8" width="480" alt="thumbnail" title="NEW Pinecone Assistant" 
/></p>Pinecone Assistant is a new AI assistant service from Pinecone, bringing together the best of LLMs and GenAI with advanced Retrieval Augmented Generation (RAG) methods to reduce hallucination and optimize assistant reliability.<br /><br />🚩 Get Access:<br />https://www.pinecone.io/product/pinecone-assistant/<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/learn/generation/pinecone-assistant/assistants-ai-demo.ipynb<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />00:00 AI Assistants<br />00:41 Pinecone Assistants in Python<br />01:19 Building an AI Research Assistant<br />02:11 Assistant Message and Chat<br />03:05 Adding Files to the Assistant<br />05:30 Chatting with our Assistant<br />07:23 Assistant Chat History<br />10:47 Asking about Mamba 2<br />12:11 Wrapping up with Assistants<br />...<br />https://www.youtube.com/watch?v=MPCiNA8BqO8]]></description><link>https://odysee.com/new-pinecone-assistant:dcb7579ff92bbdb59df910299e86c5b1c14df7c1</link><guid isPermaLink="true">https://odysee.com/new-pinecone-assistant:dcb7579ff92bbdb59df910299e86c5b1c14df7c1</guid><pubDate>Tue, 25 Jun 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/new-pinecone-assistant/dcb7579ff92bbdb59df910299e86c5b1c14df7c1/09f1e8.mp4" length="120448000" type="video/mp4"/><itunes:title>NEW Pinecone Assistant</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/MPCiNA8BqO8"/><itunes:duration>826</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Semantic Chunking - 3 Methods for Better RAG]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/7JS0pqXvha8" width="480" alt="thumbnail" 
title="Semantic Chunking - 3 Methods for Better RAG" /></p>Semantic chunking allows us to build more context-aware chunks of information. We can use this for RAG, splitting video and audio, and much more.<br /><br />In this video, we will use a simple RAG-focused example. We will learn about three different types of chunkers: StatisticalChunker, ConsecutiveChunker, and CumulativeChunker.<br /><br />At the end, we also discuss semantic chunking for video, such as for the new gpt-4o and other multi-modal use cases.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/semantic-chunkers/blob/main/docs/00-chunkers-intro.ipynb<br /><br />⭐️ Article:<br />https://www.aurelio.ai/learn/semantic-chunkers-intro<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #artificialintelligence #chatbot #nlp <br /><br />00:00 3 Types of Semantic Chunking<br />00:42 Python Prerequisites<br />02:44 Statistical Semantic Chunking<br />04:38 Consecutive Semantic Chunking<br />06:45 Cumulative Semantic Chunking<br />08:58 Multi-modal Chunking<br />...<br />https://www.youtube.com/watch?v=7JS0pqXvha8]]></description><link>https://odysee.com/semantic-chunking-3-methods-for-better:9b22870638187020cfcdd1167e5a893e0d0f3e9b</link><guid isPermaLink="true">https://odysee.com/semantic-chunking-3-methods-for-better:9b22870638187020cfcdd1167e5a893e0d0f3e9b</guid><pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/semantic-chunking-3-methods-for-better/9b22870638187020cfcdd1167e5a893e0d0f3e9b/080a3f.mp4" length="114491449" type="video/mp4"/><itunes:title>Semantic Chunking - 3 Methods for Better RAG</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image 
href="https://thumbnails.lbry.com/7JS0pqXvha8"/><itunes:duration>612</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Processing Videos for GPT-4o and Search]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/gxqdNl1nTYw" width="480" alt="thumbnail" title="Processing Videos for GPT-4o and Search" /></p>Recent multi-modal models like OpenAI's gpt-4o and Google's Gemini 1.5 models can comprehend video. When feeding video into these new models, we can push frames at a set frequency (for example, one frame every second) — but this method can be wildly inefficient and expensive.<br /><br />Fortunately, there is a better method called "semantic chunking." Semantic chunking is a common method used in text-based Retrieval-Augmented Generation (RAG), but we can apply the same logic to video using image embedding models. Using the similarity between these frames, we can effectively split videos based on the semantic meaning of the constituent frames.<br /><br />In this video, we'll explore how to use two test videos and chunk them into semantic blocks.<br /><br />📌 Code:<br />https://github.com/aurelio-labs/semantic-chunkers/blob/main/docs/01-video-chunking.ipynb<br /><br />📖 Article:<br />https://www.aurelio.ai/learn/video-chunking<br /><br />⭐ Repo:<br />https://github.com/aurelio-labs/semantic-chunkers<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #artificialintelligence #openai <br /><br />00:00 Semantic Chunking<br />00:24 Video Chunking and gpt-4o<br />01:59 Video Chunking Code<br />03:28 Setting up the Vision Transformer<br />05:56 ViT vs. 
CLIP and other models<br />06:40 Video Chunking Results<br />08:37 Using CLIP for Vision Chunking<br />11:29 Final Conclusion on Video Processing<br />...<br />https://www.youtube.com/watch?v=gxqdNl1nTYw]]></description><link>https://odysee.com/processing-videos-for-gpt-4o-and-search:0ce196a80f1642916cd8e4f08fab10dec5194e6f</link><guid isPermaLink="true">https://odysee.com/processing-videos-for-gpt-4o-and-search:0ce196a80f1642916cd8e4f08fab10dec5194e6f</guid><pubDate>Tue, 21 May 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/processing-videos-for-gpt-4o-and-search/0ce196a80f1642916cd8e4f08fab10dec5194e6f/99f7ee.mp4" length="189355246" type="video/mp4"/><itunes:title>Processing Videos for GPT-4o and Search</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/gxqdNl1nTYw"/><itunes:duration>767</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[NVIDIA's NEW AI Workbench for AI Engineers]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/DcrFL_zNRKM" width="480" alt="thumbnail" title="NVIDIA's NEW AI Workbench for AI Engineers" /></p>NVIDIA AI Workbench (NVWB) is a software toolkit designed to help AI engineers and data scientists build in GPU-enabled environments.<br /><br />Using NVWB, we can set up a local AI project with a prebuilt template with a few clicks. Then, after building out our project locally, we can quickly deploy it to a more powerful remote GPU instance, switch to a different remote, or go back to local.<br /><br />By abstracting away many repetitive and tedious boilerplate actions, NVWB aims to help AI engineers focus on the core of AI development. 
It helps us reduce time spent on managing our dev environment, handling deployments, and maintaining remote compute instances.<br /><br />In this tutorial, we'll learn about NVWB's features, where to use it, and how to use it.<br /><br />I'm using an NVIDIA RTX 5000 Ada Generation Laptop GPU here, from Dell Precision AI-ready workstations, enabling substantial performance across AI projects and streamlining both training and development phases while still keeping things lightweight for travel. More info here: https://dell.com/precisionai<br /><br />📌 Download NVWB:<br />https://nvda.ws/4acZNRZ<br /><br />📖 Read the Article:<br />https://www.aurelio.ai/learn/ai-workbench-intro<br /><br />💻 Deploying remote EC2 instances for AI Workbench:<br />https://www.aurelio.ai/learn/ai-workbench-remote<br /><br />📌 Code:<br />https://github.com/NVIDIA/workbench-example-rapids-cudf/blob/main/code/cudf-pandas-demo.ipynb<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />00:00 NVIDIA AI Workbench<br />01:18 Installing AI Workbench<br />04:05 Sponsor Segment<br />05:46 AI Workbench Locations<br />06:54 Creating and Loading Projects<br />09:21 AI Workbench Projects<br />14:18 Jupyterlab in AI Workbench<br />17:46 Using CuDF and Pandas<br />19:51 Finishing up with AI Workbench<br /><br />#ai #artificialintelligence #chatbot #nlp  #ad<br />...<br />https://www.youtube.com/watch?v=DcrFL_zNRKM]]></description><link>https://odysee.com/nvidia%27s-new-ai-workbench-for-ai:3a0e941b135fbb8766360fc4011db2048d6cc2e0</link><guid isPermaLink="true">https://odysee.com/nvidia%27s-new-ai-workbench-for-ai:3a0e941b135fbb8766360fc4011db2048d6cc2e0</guid><pubDate>Thu, 16 May 2024 00:00:00 GMT</pubDate><enclosure
url="https://player.odycdn.com/api/v3/streams/free/nvidia&apos;s-new-ai-workbench-for-ai/3a0e941b135fbb8766360fc4011db2048d6cc2e0/857855.mp4" length="343521618" type="video/mp4"/><itunes:title>NVIDIA&apos;s NEW AI Workbench for AI Engineers</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/DcrFL_zNRKM"/><itunes:duration>1324</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Semantic Chunking for RAG]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/TcRRfcbsApw" width="480" alt="thumbnail" title="Semantic Chunking for RAG" /></p>Semantic chunking for RAG allows us to build more concise chunks for our RAG pipelines, chatbots, and AI agents. We can pair this with various LLMs and embedding models from OpenAI, Cohere, Anthropic, etc, and libraries like LangChain or CrewAI to build potentially improved Retrieval Augmented Generation (RAG) pipelines.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/learn/generation/better-rag/02b-semantic-chunking.ipynb<br /><br />🚩 Intro to Semantic Chunking:<br />https://www.aurelio.ai/learn/semantic-chunkers-intro<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />00:00 Semantic Chunking for RAG<br />00:45 What is Semantic Chunking<br />03:31 Semantic Chunking in Python<br />12:17 Adding Context to Chunks<br />13:41 Providing LLMs with More Context<br />18:11 Indexing our Chunks<br />20:27 Creating Chunks for the LLM<br />27:18 Querying for Chunks<br /><br />#artificialintelligence #ai #nlp #chatbot #openai<br />...<br 
/>https://www.youtube.com/watch?v=TcRRfcbsApw]]></description><link>https://odysee.com/semantic-chunking-for-rag:aa4c0c8c399ec625a86517a2c4ca24e5cc512d23</link><guid isPermaLink="true">https://odysee.com/semantic-chunking-for-rag:aa4c0c8c399ec625a86517a2c4ca24e5cc512d23</guid><pubDate>Sat, 04 May 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/semantic-chunking-for-rag/aa4c0c8c399ec625a86517a2c4ca24e5cc512d23/9f8be4.mp4" length="122525742" type="video/mp4"/><itunes:title>Semantic Chunking for RAG</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/TcRRfcbsApw"/><itunes:duration>1795</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[LangGraph 101: it's better than LangChain]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/qaWOwbFw3cs" width="480" alt="thumbnail" title="LangGraph 101: it's better than LangChain" /></p>LangGraph is a LangChain-built library for building intelligent AI agents as graphs, i.e. agentic state machines.
It allows us to build more powerful and flexible AI agents than what we can build using just the core library, LangChain.<br /><br />In this video, we'll see how to build agents with LangGraph and OpenAI.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/learn/generation/langchain/langgraph/00-langgraph-intro.ipynb<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />#ai #langchain #artificialintelligence #nlp #chatbot #openai <br /><br />00:00 Intro to LangGraph<br />00:52 Graphs in LangGraph<br />03:00 More Complex LangGraph Agent<br />08:12 LangGraph Graph State<br />14:00 LangGraph Agent Node<br />17:08 Forcing a Specific LLM Output<br />20:00 Building the Graph<br />23:23 Using our Agent Graph<br />28:32 LangGraph vs LangChain<br />...<br />https://www.youtube.com/watch?v=qaWOwbFw3cs]]></description><link>https://odysee.com/langgraph-101-it%27s-better-than-langchain:afa46debf8a6de45079f8d18d0fde57e7295589a</link><guid isPermaLink="true">https://odysee.com/langgraph-101-it%27s-better-than-langchain:afa46debf8a6de45079f8d18d0fde57e7295589a</guid><pubDate>Tue, 23 Apr 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/langgraph-101-it&apos;s-better-than-langchain/afa46debf8a6de45079f8d18d0fde57e7295589a/ff0975.mp4" length="210747433" type="video/mp4"/><itunes:title>LangGraph 101: it&apos;s better than LangChain</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/qaWOwbFw3cs"/><itunes:duration>1945</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[AI Agent Evaluation with RAGAS]]></title><description><![CDATA[<p><img 
src="https://thumbnails.lbry.com/-_52DIIOsCE" width="480" alt="thumbnail" title="AI Agent Evaluation with RAGAS" /></p>RAGAS (RAG ASsessment) is an evaluation framework for RAG pipelines. Here, we see how to use RAGAS to evaluate an AI agent built with LangChain, Anthropic's Claude 3, Cohere's embedding models, and the Pinecone vector database.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/learn/generation/better-rag/03-ragas-evaluation.ipynb<br /><br />📕 Article:<br />https://www.pinecone.io/learn/series/rag/ragas/<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />00:00 RAG Evaluation<br />00:39 Overview of LangChain RAG Agent<br />03:04 RAGAS Code Prerequisites<br />03:40 Agent Output for RAGAS<br />05:14 RAGAS Evaluation Format<br />08:04 RAGAS Metrics<br />08:56 Understanding RAGAS Metrics<br />09:16 Retrieval Metrics<br />11:55 RAGAS Context Recall<br />14:43 RAGAS Context Precision<br />15:52 Generation Metrics<br />16:05 RAGAS Faithfulness<br />17:16 RAGAS Answer Relevancy<br />18:40 Metrics Driven Development<br /><br />#ai #artificialintelligence #nlp #chatbot #langchain<br />...<br />https://www.youtube.com/watch?v=-_52DIIOsCE]]></description><link>https://odysee.com/ai-agent-evaluation-with-ragas:0db30759b1f6d4601683ae9ef3f6cdbf840a33b8</link><guid isPermaLink="true">https://odysee.com/ai-agent-evaluation-with-ragas:0db30759b1f6d4601683ae9ef3f6cdbf840a33b8</guid><pubDate>Thu, 04 Apr 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/ai-agent-evaluation-with-ragas/0db30759b1f6d4601683ae9ef3f6cdbf840a33b8/b44c51.mp4" length="314718753" type="video/mp4"/><itunes:title>AI Agent Evaluation with 
RAGAS</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/-_52DIIOsCE"/><itunes:duration>1181</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Claude 3 Opus RAG Chatbot (Full Walkthrough)]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/rbzYZLfQbAM" width="480" alt="thumbnail" title="Claude 3 Opus RAG Chatbot (Full Walkthrough)" /></p>Claude 3 Opus is a state-of-the-art (SOTA) LLM from Anthropic. In this walkthrough, we'll see how to use Claude 3 Opus as a conversational AI agent with LangChain v1, using a Retrieval Augmented Generation (RAG) tool powered by Voyage AI embeddings and the Pinecone vector database.<br /><br />Putting all of these together, we get a highly capable conversational RAG agent.<br /><br />📌 Code:<br />https://github.com/pinecone-io/examples/blob/master/learn/generation/langchain/v1/claude-3-agent.ipynb<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />00:00 Claude 3 AI Agent in LangChain<br />00:33 Finding Claude 3 RAG Code<br />01:35 Using Voyage AI Embeddings<br />02:25 Using Pinecone Knowledge Base for RAG<br />03:55 Claude 3 AI Agent Setup<br />09:19 Using Claude 3 Agent<br />10:17 Adding Conversational Memory<br />12:32 Testing Claude 3 Agent with Memory<br />14:40 Final Thoughts on AI Agents and Anthropic<br /><br />#ai #claude3 #artificialintelligence #anthropic #nlp #chatbot #langchain<br />...<br />https://www.youtube.com/watch?v=rbzYZLfQbAM]]></description><link>https://odysee.com/claude-3-opus-rag-chatbot-%28full:a28181de880bef9976527689a08ea4d5d7205cbb</link><guid 
isPermaLink="true">https://odysee.com/claude-3-opus-rag-chatbot-%28full:a28181de880bef9976527689a08ea4d5d7205cbb</guid><pubDate>Fri, 15 Mar 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/claude-3-opus-rag-chatbot-(full/a28181de880bef9976527689a08ea4d5d7205cbb/d5d0c3.mp4" length="136928451" type="video/mp4"/><itunes:title>Claude 3 Opus RAG Chatbot (Full Walkthrough)</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/rbzYZLfQbAM"/><itunes:duration>943</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Multi-Modal NSFW Detection with AI]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/EqKjaLrpeI4" width="480" alt="thumbnail" title="Multi-Modal NSFW Detection with AI" /></p>Using multi-modal models like OpenAI's CLIP with the Semantic Router library, we can detect specific types of image or video, for example distinguishing Not Shrek For Work (NSFW) from Shrek For Work (SFW) images. 
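The routing mechanism behind this can be sketched in plain Python. This is a conceptual sketch, not the semantic-router API; the toy 2-d vectors stand in for real CLIP embeddings, and the route names and thresholds are invented for illustration.

```python
import math

def cosine(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Each route holds reference vectors (e.g. CLIP embeddings of example
# images) and a score threshold that a query must clear to match.
routes = {
    "nsfw": {"refs": [[1.0, 0.1], [0.9, 0.2]], "threshold": 0.8},
    "sfw": {"refs": [[0.1, 1.0], [0.2, 0.9]], "threshold": 0.8},
}

def route(query):
    # score each route by its best-matching reference vector;
    # return the highest-scoring route above threshold, else None
    best_name, best_score = None, 0.0
    for name, r in routes.items():
        score = max(cosine(query, ref) for ref in r["refs"])
        if score >= r["threshold"] and score > best_score:
            best_name, best_score = name, score
    return best_name

match = route([0.95, 0.15])    # close to the "nsfw" references
no_match = route([-0.5, 0.5])  # below every threshold, so no route
```

The real library embeds images with CLIP and handles route management for you; the comparison logic is the same idea at scale.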
In this video, we'll see how.<br /><br />⭐ GitHub Repo:<br />https://github.com/aurelio-labs/semantic-router/<br /><br />📌 Code:<br />https://github.com/aurelio-labs/semantic-router/blob/main/docs/07-multi-modal.ipynb<br /><br />🔥 Semantic Router Course:<br />https://www.aurelio.ai/course/semantic-router<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />00:00 AI Image Classification<br />00:23 How to use Multi-Modal AI<br />01:47 Finding Image Detection Notebook<br />02:18 Shrek Dataset<br />04:55 Creating Multi-Modal Routes<br />06:36 Testing NSFW Image Detection<br />07:53 Final Notes on Multi-Modal AI<br /><br />#ai #artificialintelligence #nlp #openai<br />...<br />https://www.youtube.com/watch?v=EqKjaLrpeI4]]></description><link>https://odysee.com/multi-modal-nsfw-detection-with-ai:5f6e4884f6faaea5422847385b3e0626266d97fb</link><guid isPermaLink="true">https://odysee.com/multi-modal-nsfw-detection-with-ai:5f6e4884f6faaea5422847385b3e0626266d97fb</guid><pubDate>Thu, 07 Mar 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/multi-modal-nsfw-detection-with-ai/5f6e4884f6faaea5422847385b3e0626266d97fb/f201c3.mp4" length="98686444" type="video/mp4"/><itunes:title>Multi-Modal NSFW Detection with AI</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/EqKjaLrpeI4"/><itunes:duration>562</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[AI Decision Making — Optimizing Routes]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/Qi2_r4AopLM" width="480" alt="thumbnail" title="AI Decision Making — Optimizing Routes" /></p>AI decision-making can now be easily tuned using the optimization methods available in Semantic Router.<br /><br />Route score 
thresholds define whether a route should be chosen. If the score we identify for any given route is higher than the Route.score_threshold, it passes; otherwise, it does not, and either another route is chosen or we return no route.<br /><br />Given that this one score_threshold parameter can define the choice of a route, it's important to get it right — but it's incredibly inefficient to do so manually. Instead, we can use the fit and evaluate methods of our RouteLayer. All we must do is pass a small number of (utterance, target route) examples to these methods; with fit, we will often see dramatically improved performance within seconds — we will see how to measure that performance gain with evaluate.<br /><br />⭐ GitHub Repo:<br />https://github.com/aurelio-labs/semantic-router/<br /><br />📌 Code:<br />https://github.com/aurelio-labs/semantic-router/blob/main/docs/06-threshold-optimization.ipynb<br /><br />🔥 Semantic Router Course:<br />https://www.aurelio.ai/course/semantic-router<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br />...<br />https://www.youtube.com/watch?v=Qi2_r4AopLM]]></description><link>https://odysee.com/ai-decision-making-%E2%80%94-optimizing-routes:ee08811712497dd02426f1bf00eb707a79ad303a</link><guid isPermaLink="true">https://odysee.com/ai-decision-making-%E2%80%94-optimizing-routes:ee08811712497dd02426f1bf00eb707a79ad303a</guid><pubDate>Tue, 27 Feb 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/ai-decision-making-—-optimizing-routes/ee08811712497dd02426f1bf00eb707a79ad303a/6a6592.mp4" length="95430929" type="video/mp4"/><itunes:title>AI Decision Making — Optimizing Routes</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image 
href="https://thumbnails.lbry.com/Qi2_r4AopLM"/><itunes:duration>644</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[Steerable AI with Pinecone + Semantic Router]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/qjRrMxT20T0" width="480" alt="thumbnail" title="Steerable AI with Pinecone + Semantic Router" /></p>We can make AI steerable and predictable using Semantic Router. The more fine-grained control we need, the more routes we require. At very large scales, it can be useful to use a vector database to store and search through your route vector space. In this walkthrough, we will see how to use the new Pinecone integration in Semantic Router.<br /><br />⭐ GitHub Repo:<br />https://github.com/aurelio-labs/semantic-router/<br /><br />📌 Code:<br />https://github.com/aurelio-labs/semantic-router/blob/main/docs/examples/pinecone-and-scaling.ipynb<br /><br />🌲 Subscribe for Latest Articles and Videos:<br />https://www.pinecone.io/newsletter-signup/<br /><br />🔥 Semantic Router Course:<br />https://www.aurelio.ai/course/semantic-router<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br /><br />00:00 Pinecone and Semantic Router<br />01:53 Finding Code for Pinecone<br />04:12 Getting Routes from Hugging Face<br />07:36 Loading Route Layers from Pinecone<br /><br />#ai #artificialintelligence #nlp #chatbot<br />...<br />https://www.youtube.com/watch?v=qjRrMxT20T0]]></description><link>https://odysee.com/steerable-ai-with-pinecone-%2B-semantic:8820bf4bfe9c7c6feb932cea8c8c25f9a7d584ca</link><guid isPermaLink="true">https://odysee.com/steerable-ai-with-pinecone-%2B-semantic:8820bf4bfe9c7c6feb932cea8c8c25f9a7d584ca</guid><pubDate>Wed, 21 Feb 2024 00:00:00 GMT</pubDate><enclosure 
url="https://player.odycdn.com/api/v3/streams/free/steerable-ai-with-pinecone-+-semantic/8820bf4bfe9c7c6feb932cea8c8c25f9a7d584ca/d71ac5.mp4" length="116172597" type="video/mp4"/><itunes:title>Steerable AI with Pinecone + Semantic Router</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/qjRrMxT20T0"/><itunes:duration>690</itunes:duration><itunes:explicit>false</itunes:explicit></item><item><title><![CDATA[OpenAI's Sora: Incredible AI Generated Video]]></title><description><![CDATA[<p><img src="https://thumbnails.lbry.com/F-BuJId6cK4" width="480" alt="thumbnail" title="OpenAI's Sora: Incredible AI Generated Video" /></p>Taking a look at the new text-to-video diffusion model, Sora, from OpenAI — it is truly incredible.<br /><br />OpenAI Blog Post:<br />https://openai.com/sora<br /><br />👋🏼 AI Consulting:<br />https://aurelio.ai<br /><br />👾 Discord:<br />https://discord.gg/c5QtDB9RAP<br /><br />Twitter: https://twitter.com/jamescalam<br />LinkedIn: https://www.linkedin.com/in/jamescalam/<br />...<br />https://www.youtube.com/watch?v=F-BuJId6cK4]]></description><link>https://odysee.com/openai%27s-sora-incredible-ai-generated:12ead13d652b907b7e66bb435810317faa1ba0d4</link><guid isPermaLink="true">https://odysee.com/openai%27s-sora-incredible-ai-generated:12ead13d652b907b7e66bb435810317faa1ba0d4</guid><pubDate>Thu, 15 Feb 2024 00:00:00 GMT</pubDate><enclosure url="https://player.odycdn.com/api/v3/streams/free/openai&apos;s-sora-incredible-ai-generated/12ead13d652b907b7e66bb435810317faa1ba0d4/7f305d.mp4" length="282090325" type="video/mp4"/><itunes:title>OpenAI&apos;s Sora: Incredible AI Generated Video</itunes:title><itunes:author>James Briggs</itunes:author><itunes:image href="https://thumbnails.lbry.com/F-BuJId6cK4"/><itunes:duration>708</itunes:duration><itunes:explicit>false</itunes:explicit></item></channel></rss>