Build Your First LangChain Application – Part 3: Understanding Core Concepts and Components

Now that we have our Azure OpenAI connection working, let’s dive deep into LangChain’s core components and understand how they work together to create powerful AI applications.

Core LangChain Components

1. Language Models (LLMs)

Language Models are the foundation of LangChain applications. They provide the AI capabilities for text generation, completion, and understanding.

// Advanced LLM configuration
import { AzureChatOpenAI } from "@langchain/openai";

const advancedLLM = new AzureChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: "your-instance",
  azureOpenAIApiDeploymentName: "gpt-4",
  azureOpenAIApiVersion: "2024-02-15-preview",
  temperature: 0.3,           // Lower for more focused responses
  maxTokens: 1000,            // Limit response length
  topP: 0.9,                  // Nucleus sampling parameter
  frequencyPenalty: 0.1,      // Reduce repetition
  presencePenalty: 0.1,       // Encourage diverse vocabulary
});
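To build intuition for what `temperature` does: the model divides its logits by the temperature before applying softmax, so lower values concentrate probability on the most likely tokens. A rough, illustrative sketch in plain JavaScript (not LangChain or Azure OpenAI code):

```javascript
// Rough sketch of how temperature reshapes a token distribution:
// logits are divided by the temperature before the softmax, so
// lower temperatures make the distribution peakier (more focused).
function softmaxWithTemperature(logits, temperature) {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

console.log(softmaxWithTemperature([2, 1, 0], 1.0)); // flatter distribution
console.log(softmaxWithTemperature([2, 1, 0], 0.3)); // sharply peaked on the top token
```

This is why we set `temperature: 0.3` above for more focused, deterministic responses; values closer to 1 give more varied output.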

2. Prompts and Prompt Templates

Prompt Templates allow you to create reusable, parameterized prompts for consistent AI interactions.

import { PromptTemplate, ChatPromptTemplate } from "@langchain/core/prompts";

// Simple prompt template
const simpleTemplate = new PromptTemplate({
  template: "Explain {topic} in simple terms for a {audience}.",
  inputVariables: ["topic", "audience"],
});

// Chat prompt template with system and human messages
const chatTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are an expert {role} with {years} years of experience."],
  ["human", "Please explain {concept} and provide {examples} practical examples."],
]);
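Conceptually, a prompt template is string interpolation over named input variables, with validation that every variable is supplied. A minimal plain-JavaScript sketch of that idea (illustrative only, not the actual LangChain implementation):

```javascript
// Minimal sketch of prompt-template interpolation: replace each
// {variable} placeholder with its value, failing loudly on a
// missing input variable.
function formatTemplate(template, values) {
  return template.replace(/\{(\w+)\}/g, (match, name) => {
    if (!(name in values)) {
      throw new Error(`Missing input variable: ${name}`);
    }
    return values[name];
  });
}

const template = "Explain {topic} in simple terms for a {audience}.";
console.log(formatTemplate(template, { topic: "recursion", audience: "beginner" }));
// → "Explain recursion in simple terms for a beginner."
```

LangChain's real templates add more (partial variables, message roles, composition), but the core contract is the same: named inputs in, a fully rendered prompt out.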

3. Chains

Chains allow you to combine multiple components to create complex workflows.

import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";

export class ChainService {
  constructor(llm) {
    this.llm = llm; // the configured AzureChatOpenAI instance from Part 2
  }

  async createContentChain() {
    const promptTemplate = new PromptTemplate({
      template: `
        Create a {content_type} about {topic}.
        Target audience: {audience}
        Tone: {tone}
        Length: {length}

        Content:
      `,
      inputVariables: ["content_type", "topic", "audience", "tone", "length"],
    });

    const chain = new LLMChain({
      llm: this.llm,
      prompt: promptTemplate,
    });

    return chain;
  }
}
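At its core, a chain just pipes a formatted prompt into a model and returns the result. The following plain-JavaScript sketch (with a hypothetical `fakeLLM` stand-in, not a real model call) shows the essence of what `LLMChain` does for us:

```javascript
// Illustrative sketch of a chain: format the prompt from the inputs,
// pass it to the model, return the model's output. fakeLLM below is a
// stand-in for a real LLM call.
function makeChain(promptTemplate, llm) {
  return async function invoke(inputs) {
    const prompt = promptTemplate.replace(/\{(\w+)\}/g, (_, name) => inputs[name]);
    return llm(prompt);
  };
}

const fakeLLM = async (prompt) => `LLM response to: ${prompt}`;
const chain = makeChain("Create a {content_type} about {topic}.", fakeLLM);

chain({ content_type: "tweet", topic: "LangChain" }).then(console.log);
// → "LLM response to: Create a tweet about LangChain."
```

Because the model is passed in rather than hard-coded, swapping LLMs or prompts changes nothing else in the workflow, which is exactly the modularity LangChain is built around.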

4. Memory

Memory components allow your application to maintain context across conversations.

import { BufferWindowMemory } from "langchain/memory";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

export class MemoryService {
  // Note: BufferMemory keeps the full history; BufferWindowMemory is the
  // variant that accepts k to keep only the most recent exchanges.
  createBufferMemory(sessionId) {
    const memory = new BufferWindowMemory({
      memoryKey: "chat_history",
      chatHistory: new ChatMessageHistory(), // in production, key this by sessionId
      returnMessages: true,
      k: 10, // keep the last 10 exchanges
    });

    return memory;
  }
}
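The windowing behavior itself is simple: store every message, but only hand the most recent k back to the model. A minimal plain-JavaScript sketch of that idea (illustrative, not the LangChain implementation):

```javascript
// Minimal sketch of windowed buffer memory: messages accumulate,
// but only the last k are returned as conversation context.
class WindowBuffer {
  constructor(k) {
    this.k = k;
    this.messages = [];
  }

  add(role, content) {
    this.messages.push({ role, content });
  }

  // Only the most recent k messages are sent back to the model.
  history() {
    return this.messages.slice(-this.k);
  }
}

const memory = new WindowBuffer(2);
memory.add("human", "Hello");
memory.add("ai", "Hi there!");
memory.add("human", "What's LangChain?");
console.log(memory.history().length); // → 2
```

Keeping the window small bounds the token count of each request, which matters once conversations get long.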

Practical Example: Question & Answer Chain

// src/services/questionAnswerService.js
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

export class QuestionAnswerService {
  constructor(llm) {
    this.llm = llm; // the configured AzureChatOpenAI instance from Part 2
    this.setupChain();
  }

  setupChain() {
    const qaPrompt = ChatPromptTemplate.fromMessages([
      ["system", `You are a helpful assistant that answers questions based on provided context.
       If you don't know the answer based on the context, say so clearly.`],
      ["human", `Context: {context}
       Question: {question}
       Answer:`],
    ]);

    this.qaChain = RunnableSequence.from([
      qaPrompt,
      this.llm,
      new StringOutputParser(),
    ]);
  }

  async answerQuestion(context, question) {
    const response = await this.qaChain.invoke({
      context,
      question,
    });
    return response;
  }
}
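`RunnableSequence` composes steps so that each step's output becomes the next step's input: the prompt template produces messages, the model produces a message, and the parser extracts plain text. A plain-JavaScript sketch of that composition, using hypothetical stand-ins for each stage rather than the real LangChain classes:

```javascript
// Illustrative sketch of sequential composition à la RunnableSequence:
// each step's output feeds the next step's input.
function sequence(...steps) {
  return async (input) =>
    steps.reduce((acc, step) => acc.then(step), Promise.resolve(input));
}

// Hypothetical stand-ins for prompt formatting, the model call,
// and output parsing.
const formatPrompt = ({ context, question }) =>
  `Context: ${context}\nQuestion: ${question}\nAnswer:`;
const fakeModel = async (prompt) => ({ content: `Answer based on: ${prompt}` });
const parseOutput = (message) => message.content;

const qaChain = sequence(formatPrompt, fakeModel, parseOutput);
qaChain({ context: "LangChain is a framework.", question: "What is LangChain?" })
  .then(console.log);
```

This is why the real chain ends with `new StringOutputParser()`: the model returns a message object, and the parser turns it into the plain string our service hands back to callers.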

Key Takeaways

  • Modularity: LangChain promotes building modular, reusable components
  • Flexibility: Easy to swap different LLMs or modify chain logic
  • Scalability: Components can be combined to create complex workflows
  • Maintainability: Clear separation of concerns makes code easier to maintain

In Part 4, we’ll explore document loading and text processing, setting the foundation for our document analysis system.
