Prompt Engineering and Generative AI

Prompt Engineering or Text Tinkering: What are we co-creating with GPT models?

TL;DR: I can’t wait until universities start offering degrees in “Prompt Engineering” (troll). If that happens, I’ll know the GenAI hype cycle has peaked. Kidding aside, it is important to think about our interactions with GenAI, because in a few years these applications will be as ubiquitous and as much a part of our daily lives as Google and social media are today.

The field of text generation has taken a giant leap forward with the introduction of models like GPT-3.5 Turbo and GPT-4. The journey from basic rule-based systems to advanced AI models like GPT-4 represents a rapid evolution: early language models were limited by rigid syntax rules and lacked contextual understanding, while today’s large language models can convincingly generate human-like text and deliver more contextual, personalized responses.

In this post, I’ll define Prompt Engineering, explore its intricacies, and consider how we can better work with GenAI as a collaborator.

Prompt Engineering Defined

Prompt Engineering is a fancy way of saying that we as humans can design inputs (prompts) to elicit the most effective and accurate outputs from a large language model. Think of it as a dialogue with a highly intelligent machine, where the quality of your question significantly influences the quality of the response. The inverse is also true: poor prompts tend to produce weak responses from LLMs.

Good vs. Bad Prompt Examples

  • Good Prompt: “Explain the concept of prompt engineering in simple terms. Use language that would make sense to a 5-year-old.”
  • Bad Prompt: “Prompt engineering?”

The good prompt is specific and clear, guiding the AI to provide a detailed, understandable explanation. The bad prompt is vague and open-ended, leading to potentially irrelevant or incomplete responses.

Invoking Large Language Models (LLMs)

GPT-3.5 Turbo and GPT-4

If you’ve spent any time playing with ChatGPT, you are already familiar with GPT-3.5 Turbo and GPT-4, the language models developed by OpenAI. They can understand and generate human-like text, making them valuable for applications such as content generation, translation, and more. Here are a few tasks where GPT models shine, followed by a minimal invocation example:

  • Content Generation: Creating stories, articles, or even poetry.
  • Translation: Accurately translating between languages.
  • Task Automation: Summarizing texts, generating code, etc.
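
To make that concrete, here is a minimal sketch of invoking a model with the “good prompt” from earlier, using the OpenAI Python SDK. The model name and the assumption that an OPENAI_API_KEY environment variable is set are mine, not prescriptive:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # swap in "gpt-4" for higher-quality output
    messages=[
        {
            "role": "user",
            "content": "Explain the concept of prompt engineering in simple terms. "
                       "Use language that would make sense to a 5-year-old.",
        }
    ],
)
print(response.choices[0].message.content)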

Practical Use Cases

Content Generation for Marketing

Utilizing GPT models for content creation in marketing can save time and resources: think product descriptions, email campaigns, or social media posts. You can even use the same models to produce variants of the content, including localizing the language so it works for your audiences worldwide. Example:

prompt = "Write a catchy product description for a smartwatch focusing on health features."

Text Summarization

GPT models are great at summarizing content and extracting key points. They excel at this task because of their deep learning foundations and understanding of language nuances.

How GPT Models Summarize Text:

  1. Understanding Context: GPT models are trained on vast datasets, allowing them to understand context and extract key points from a text.
  2. Generating a Summary: The model then applies its learned knowledge to generate a concise and coherent summary of the input text, as in the sketch below.
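
Putting those two steps together, a minimal summarization call might look like this. The system message and the article placeholder are illustrative assumptions, not a prescribed approach:

from openai import OpenAI

client = OpenAI()

article = "..."  # replace with the long text you want condensed

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You summarize articles into three concise bullet points."},
        {"role": "user", "content": f"Summarize the following text:\n\n{article}"},
    ],
)
print(response.choices[0].message.content)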

Embeddings and Vector Databases

Embeddings are numerical representations of words or phrases in a high-dimensional space. They capture the semantic meaning of the text, allowing the model to understand context and relationships between words. The GPT model uses these embeddings to understand the text’s content and its underlying meaning, which is crucial for accurate summarization.
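
For a sense of what an embedding actually is, here is a minimal sketch using OpenAI’s embeddings endpoint. The model name text-embedding-ada-002 is one common choice I’m assuming here; any embedding model behaves similarly:

from openai import OpenAI

client = OpenAI()

result = client.embeddings.create(
    model="text-embedding-ada-002",
    input="Prompt engineering is designing inputs to get better outputs from an LLM.",
)
vector = result.data[0].embedding
print(len(vector))  # 1536 dimensions for this model
print(vector[:5])   # the first few coordinates of the vector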

Vector Databases store embeddings in a structured format, allowing for efficient retrieval and comparison of text embeddings. They are used to query similar texts, find relationships, or retrieve information based on semantic similarity.

For summarization, a vector database can be used to compare different parts of the text. The model can identify the most relevant and important sections (based on their embeddings) to include in the summary.
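
Here is a minimal sketch of that idea, using plain NumPy cosine similarity in place of a dedicated vector database. The chunks and query are hypothetical placeholders:

import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text):
    # Turn a piece of text into its embedding vector.
    return client.embeddings.create(model="text-embedding-ada-002", input=text).data[0].embedding

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

chunks = ["Intro paragraph ...", "Key findings ...", "Appendix ..."]  # hypothetical sections
query = "What are the most important findings?"

query_vec = embed(query)
ranked = sorted(chunks, key=lambda c: cosine(embed(c), query_vec), reverse=True)
print(ranked[0])  # the most relevant section to feed into a summary prompt

A real vector database (Pinecone, Weaviate, pgvector, and the like) performs the same comparison at scale, with indexing so embeddings aren’t recomputed on every query.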

Prompt engineering is not just about communicating with an AI; it’s about co-creating with it. I’ve been leveraging LLM capabilities for just shy of a year now, and I’m seeing a wide range of use cases. In a recent engagement, I’ve been using LangChain for intent recognition and to automate various business processes. How are you using GenAI to co-create?