Tuesday, 1 July 2025

Building RAG Agents with LLMs

About this Course

The evolution and adoption of large language models (LLMs) have been nothing short of revolutionary, with retrieval-based systems at the forefront of this technological leap. These models are not just tools for automation; they are partners in enhancing productivity, capable of holding informed conversations by interacting with a vast array of tools and documents. This course is designed for those eager to explore the potential of these systems, focusing on practical deployment and the efficient implementation required to manage the considerable demands of both users and deep learning models. As we delve into the intricacies of LLMs, participants will gain insights into advanced orchestration techniques that include internal reasoning, dialog management, and effective tooling strategies.

Learning Objectives

The goal of the course is to teach participants how to:

Compose an LLM system that can interact predictably with a user by leveraging internal and external reasoning components.

Design a dialog management and document reasoning system that maintains state and coerces information into structured formats (a minimal sketch of this idea appears below).

Leverage embedding models for efficient similarity queries for content retrieval and dialog guardrailing.

Implement, modularize, and evaluate a RAG agent that can answer questions about the research papers in its dataset without any fine-tuning.

By the end of this workshop, participants will have a solid understanding of RAG agents and the tools necessary to develop their own LLM applications.
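To make the dialog-management objective more concrete, the snippet below sketches how a running dialog state can be coerced into a structured format with Pydantic and LangChain's PydanticOutputParser. It is a minimal illustration only: the KnowledgeBase fields, the prompt wording, and the chain wiring are assumptions made for this post, not the course's actual implementation.

# Minimal sketch: coercing dialog information into a structured running state.
# Assumes the pydantic (v2) and langchain-core packages; the KnowledgeBase schema
# and prompt wording are illustrative, not the course's own definitions.
from pydantic import BaseModel, Field
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate

class KnowledgeBase(BaseModel):
    """Running state the agent maintains across dialog turns."""
    user_name: str = Field("unknown", description="Name of the user, if stated")
    topic: str = Field("unknown", description="Current topic of discussion")
    open_questions: list[str] = Field(default_factory=list,
                                      description="Questions not yet answered")

parser = PydanticOutputParser(pydantic_object=KnowledgeBase)

prompt = ChatPromptTemplate.from_template(
    "Update the knowledge base from the latest exchange.\n"
    "{format_instructions}\n"
    "Current knowledge base: {knowledge}\n"
    "Latest user message: {user_input}"
).partial(format_instructions=parser.get_format_instructions())

# With any LangChain chat model `llm`, the per-turn update becomes a small chain:
#   update_chain = prompt | llm | parser
#   state = update_chain.invoke({"knowledge": state.model_dump_json(),
#                                "user_input": user_message})

The idea is that a structured state like this can be passed back into the prompt on every turn, so the system "remembers" details of the conversation without any fine-tuning.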

Topics Covered

The workshop includes topics such as LLM Inference Interfaces; Pipeline Design with LangChain, Gradio, and LangServe; Dialog Management with Running States; Working with Documents; Embeddings for Semantic Similarity and Guardrailing; and Vector Stores for RAG Agents. Each of these sections is designed to equip participants with the knowledge and skills necessary to develop and deploy advanced LLM systems effectively.
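To give a flavor of the pipeline-design material, here is a minimal sketch of an LLM chain composed with LangChain's expression language: a prompt, a chat model, and an output parser. The langchain_nvidia_ai_endpoints package, the NVIDIA_API_KEY environment variable, and the model name are assumptions made for this example, not requirements stated by the course.

# Minimal sketch of a LangChain (LCEL) pipeline: prompt -> chat model -> parser.
# Assumes langchain-core and langchain_nvidia_ai_endpoints are installed and an
# NVIDIA_API_KEY is set; the model name is an illustrative choice, not prescribed.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_nvidia_ai_endpoints import ChatNVIDIA

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant that answers questions about research papers."),
    ("user", "{question}"),
])

llm = ChatNVIDIA(model="meta/llama-3.1-8b-instruct")  # hypothetical model choice

# Runnables compose with | and expose .invoke(), .stream(), and .batch().
chain = prompt | llm | StrOutputParser()

if __name__ == "__main__":
    print(chain.invoke({"question": "What is retrieval-augmented generation?"}))

A chain like this can then be exposed over HTTP with LangServe or wrapped in a Gradio interface, which is the role those two tools play in the topics listed above.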

Course Outline

Introduction to the workshop and setting up the environment.

Exploration of LLM inference interfaces and microservices.

Designing LLM pipelines using LangChain, Gradio, and LangServe.

Managing dialog states and integrating knowledge extraction.

Strategies for working with long-form documents.

Utilizing embeddings for semantic similarity and guardrailing.

Implementing vector stores for efficient document retrieval (see the retrieval sketch after this outline).

Evaluation, assessment, and certification.
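As a companion to the embedding and vector-store items in the outline, the sketch below shows the retrieval half of a RAG agent: documents are embedded, indexed in a vector store, and queried by semantic similarity. FAISS via langchain_community, NVIDIAEmbeddings, the embedding model name, and the sample texts are all assumptions made for this illustration.

# Minimal sketch of embedding-based retrieval for a RAG agent.
# Assumes langchain-community, faiss-cpu, and langchain_nvidia_ai_endpoints are
# installed and NVIDIA_API_KEY is set; texts and model name are illustrative only.
from langchain_community.vectorstores import FAISS
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

docs = [
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
    "A vector store indexes embeddings so semantically similar text can be found quickly.",
    "Guardrails can compare a query's embedding against embeddings of allowed topics.",
]

embedder = NVIDIAEmbeddings(model="nvidia/nv-embed-v1")  # hypothetical model choice
vectorstore = FAISS.from_texts(docs, embedder)

# Retrieve the top-k most similar chunks for a user query.
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
for doc in retriever.invoke("How does a RAG agent find relevant context?"):
    print(doc.page_content)

Feeding the retrieved chunks into a prompt like the pipeline sketch earlier is, in essence, what the final RAG agent assessment asks participants to build.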

Free Courses: Building RAG Agents with LLMs

