Prompt engineering for AI language models
Asking Large Language Models the right questions
AI language models, such as those used in ChatGPT, can fundamentally change the way technical documentation is created and used. They will make the daily work of technical communicators easier by assisting with or taking over routine tasks. This includes automatically creating draft texts and summaries, checking compliance with writing rules, or adapting texts for different audiences.
There are also new ways to deliver documentation. Methods like Retrieval Augmented Generation (RAG) can be used to deploy chatbots on websites. These chatbots combine the knowledge of documentation with the flexibility of AI language models. This allows simple support questions to be answered quickly and easily.
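The RAG idea described above can be sketched in a few lines. This is a minimal illustration, not a production retriever: it uses a toy word-overlap ranking in place of the vector search a real RAG system would use, and the documentation passages are invented examples.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG):
# retrieve the documentation passage most relevant to a question,
# then hand it to the language model as context for the answer.

def retrieve(question, passages):
    """Toy retriever: rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

def build_rag_prompt(question, passages):
    """Build the prompt that grounds the model in the retrieved passage."""
    context = retrieve(question, passages)
    return (
        "Answer the question using only the documentation below.\n"
        f"Documentation: {context}\n"
        f"Question: {question}"
    )

docs = [
    "To reset the device, hold the power button for ten seconds.",
    "The battery charges fully in about two hours.",
]
prompt = build_rag_prompt("How do I reset the device?", docs)
```

The resulting prompt is then sent to the LLM, so the chatbot answers from your documentation rather than from its training data alone.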
To effectively use AI language models for technical documentation, it is important to give them the right prompts – with the help of prompt engineering. Since AI models such as ChatGPT are not perfect and do not always deliver the desired results immediately, these prompts often need to be optimized. Good prompt engineering allows you to better control the behavior of your AI, giving you reliable, stable results and supporting your authoring processes with AI.
We support you with prompt engineering
parson helps you to improve the input (prompts) for AI language models to generate text for technical documentation or answers for users. We help you standardize these prompts and break them down into building blocks that your team can easily use. This allows you to get the most out of different AI applications.
Prompt engineering in technical documentation. This is how we work
- Clarify output requirements. We analyze what you want to achieve with an AI language model. For example, what kind of text should be written, and which authoring tasks should be supported? Or do you want to optimize the responses of an AI chatbot?
- Clarify the requirements for the LLM. What type of LLM is right for your business? Which will deliver the best results? How will your data be protected?
- Define personas. How will the answers be presented? Who is the audience for the output?
- Optimize prompts through structured testing. Prompt engineering is an iterative process that requires constant adaptation and improvement. Based on your requirements, we create a test plan and use the knowledge gained to optimize your prompts.
- Modularize and standardize prompts. We create standardized prompts and prompt modules that your teams can use as templates.
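One way such reusable prompt building blocks can look in practice is sketched below. The module names and wording are illustrative assumptions, not a fixed deliverable:

```python
# Sketch: standardized prompt modules that teams combine into full prompts.
# The persona, task, and constraint texts are illustrative examples.

PERSONA = "You are a technical communicator writing for {audience}."
TASK = "Rewrite the following draft so that it follows our style guide."
CONSTRAINTS = "Use short sentences. Address the reader directly."

def assemble_prompt(audience, draft):
    """Combine the reusable modules into one prompt for the LLM."""
    modules = [
        PERSONA.format(audience=audience),
        TASK,
        CONSTRAINTS,
        f"Draft: {draft}",
    ]
    return "\n".join(modules)

prompt = assemble_prompt(
    "first-time users",
    "The device's reset is performed via the button.",
)
```

Because each module is maintained in one place, a change to the style constraints or the persona propagates to every prompt that a team assembles from these templates.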
Learn more about prompt engineering in our FAQs.
FAQs – Frequently asked questions about prompt engineering
What is prompt engineering?
Prompt engineering is a term that emerged with the rise of Large Language Models (LLMs), such as those used by ChatGPT. A "prompt" is the input a user enters in the input field of an LLM. "Engineering" refers to the continuous adaptation and improvement of that input to generate better texts and responses.
Prompt engineering is therefore the optimization of your "conversation" with an AI language model. You need to ask the right questions to get the right answers, just like in an interview.
What are typical prompt engineering techniques?
The following are typical prompt engineering techniques:
- Assign a persona to the chatbot. ("Write as if you were a technical communicator. I'll give you the basic steps of an app functionality, and you write an engaging article on how to perform those basic steps.")
- Define the context. In the prompt, define the goal of the output. Who is the audience for the content? Where and how will the output be published?
- Use examples. Provide existing documentation as a reference or point to examples on the Web.
- Start with general answers and then get more specific. The AI language model remembers the flow of the chat and can incorporate your previous requests.
- Within the prompt, provide examples of the desired task, such as question-answer pairs, text translations, or other tasks (few-shot prompting).
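Two of the techniques above – assigning a persona and few-shot prompting – can be combined in the chat message format used by most LLM APIs. A sketch, assuming the common system/user/assistant roles; the example question-answer pairs are invented:

```python
# Sketch: combine a persona (system message) with few-shot examples
# (question-answer pairs) in the widely used chat message format.

def build_few_shot_messages(persona, examples, question):
    """Persona first, then the example pairs, then the real question."""
    messages = [{"role": "system", "content": persona}]
    for q, a in examples:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

messages = build_few_shot_messages(
    persona="You are a technical communicator. Answer in one short sentence.",
    examples=[
        ("How do I turn the device on?", "Press the power button once."),
        ("How do I mute the device?", "Hold the volume-down button until the sound is off."),
    ],
    question="How do I restart the device?",
)
```

The model infers the expected tone and answer length from the example pairs, so the final answer tends to match them without further instructions.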
What is the difference between AI, a Large Language Model and ChatGPT?
- AI is a branch of computer science. AI applications simulate human intelligence in technical systems to make those systems more useful and beneficial.
- Large Language Models (LLMs) are artificial neural networks that are trained by machine learning on large amounts of text and code data. The data comes from a variety of sources, including books, articles, websites, and code repositories. LLMs create their own text based on this training data.
- ChatGPT is a chatbot by OpenAI based on an advanced LLM.
How will prompt engineering benefit my business?
Generative AI is changing the way documentation is written and delivered. Vast amounts of text can be created in a fraction of the time that it used to take. In the same way, an AI-powered chatbot can respond to customer queries in a fast and flexible way, taking the pressure off your support team.
But the quality of the text is critical. Documentation must be accurate, consistent, and legally compliant. If you can get your LLM to generate appropriate content based on your company's style and corporate identity, AI can relieve you of repetitive tasks and speed up various processes:
- Create documentation for a new product based on a previous product model
- Update documentation based solely on input from developer-written release notes
- Convert your documentation to a new publication format, such as AsciiDoc or DITA
- Implement a chatbot that answers questions about your products based on your documentation
- Summarize large documents and topics