
AI & LLM Glossary

Key terms and concepts in prompt engineering, LLM operations, and AI application development — explained clearly for developers and teams.

All terms (36)

AI Guardrails

Safety constraints, behavioral boundaries, and policy enforcement mechanisms applied to AI systems to prevent harmful outputs, ensure compliance, and maintain alignment with organizational values.

Security

Chain-of-Thought Prompting

A prompting technique that instructs the LLM to break down complex problems into intermediate reasoning steps before producing a final answer, significantly improving accuracy on multi-step tasks.

Prompt Engineering
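
For illustration, a minimal sketch of wrapping a question in a step-by-step instruction (the `cot_prompt` helper and its exact wording are assumptions, not a standard API):

```python
def cot_prompt(question: str) -> str:
    """Wrap a question with an instruction to reason step by step."""
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, showing each "
        "intermediate calculation, then state the final answer on "
        "a line starting with 'Answer:'."
    )

prompt = cot_prompt("A train travels 60 km in 45 minutes. What is its speed in km/h?")
```

The key is the explicit request for intermediate steps; the model's reasoning trace can then also be inspected when the final answer is wrong.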

Context Window

The maximum number of tokens (input plus output) that an LLM can process in a single request, which determines how much information can be included in a prompt and response.

Architecture

Draft/Published Workflow

A two-state prompt lifecycle where prompts exist as editable drafts during development and become immutable published versions when promoted to production, separating work-in-progress from production-ready content.

LLM Ops

Environment-Scoped Prompts

A deployment strategy where the same prompt identifier serves different versions depending on the requesting environment — development, staging, or production — enabling safe testing without affecting live users.

LLM Ops
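
A toy sketch of the idea, using a hypothetical in-memory mapping (a real system would back this with a prompt registry or API):

```python
# One prompt ID, a separately pinned version per environment.
PROMPT_VERSIONS = {
    "support-triage": {"development": "v7-draft", "staging": "v6", "production": "v5"},
}

def resolve_prompt(prompt_id: str, environment: str) -> str:
    """Return the prompt version pinned to the requesting environment."""
    try:
        return PROMPT_VERSIONS[prompt_id][environment]
    except KeyError:
        raise LookupError(f"no version of {prompt_id!r} for environment {environment!r}")

live_version = resolve_prompt("support-triage", "production")
```

Development can iterate on `v7-draft` while production keeps serving `v5` under the same identifier.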

Few-Shot Prompting

A prompting technique where one or more input-output examples are included in the prompt to demonstrate the desired behavior, format, or reasoning pattern for the LLM to follow.

Prompt Engineering
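
A minimal sketch of assembling a few-shot prompt from example pairs (the `few_shot_prompt` helper and "Input:/Output:" layout are illustrative choices):

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Prepend input/output example pairs to demonstrate the desired format."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

prompt = few_shot_prompt(
    [("great product!", "positive"), ("broke after a day", "negative")],
    "arrived on time, works fine",
)
```

Ending the prompt at `Output:` invites the model to complete the pattern the examples establish.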

Guardrails

Safety constraints and behavioral boundaries embedded in prompts or applied as post-processing layers to prevent LLMs from generating harmful, off-topic, or policy-violating outputs.

Security

LLM Evaluation

The systematic process of measuring the quality, accuracy, safety, and reliability of LLM outputs against defined criteria, using automated metrics, human review, or model-based judging.

LLM Ops

LLMOps

The set of practices, tools, and workflows for operationalizing large language model applications in production, covering prompt management, evaluation, monitoring, cost control, and reliability.

LLM Ops

Model-Agnostic Prompts

Prompts designed to produce consistent, high-quality results across different LLM providers and model versions, reducing vendor lock-in and enabling flexible model selection.

Architecture

Prompt A/B Testing

The practice of running two or more prompt variants simultaneously on live traffic to statistically determine which version produces better outcomes against defined metrics.

LLM Ops
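
One common assignment strategy is deterministic hashing, so a given user always sees the same variant. A sketch (metric collection and significance testing are omitted):

```python
import hashlib

VARIANTS = ["prompt-a", "prompt-b"]

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into a variant (sticky assignment)."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]
```

Sticky assignment avoids a user flip-flopping between variants mid-session, which would contaminate per-user metrics.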

Prompt API

A REST or HTTP interface that allows applications to fetch, manage, and deliver prompts programmatically, decoupling prompt content from application code and enabling runtime updates without redeployment.

Architecture

Prompt Blocks

Typed, independently editable sections that compose a structured prompt, where each block has a designated purpose such as role definition, context, instructions, guardrails, or output format.

Prompt Engineering

Prompt Caching

The practice of storing and reusing LLM responses for identical or semantically similar prompt inputs, reducing latency and cost by avoiding redundant model calls.

Architecture
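
For the exact-match case, a minimal sketch keyed on a hash of the prompt (semantic similarity matching would require an embedding index on top of this):

```python
import hashlib

_cache: dict[str, str] = {}

def cached_completion(prompt: str, call_model) -> str:
    """Return a stored response for an identical prompt; call the model otherwise."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]

calls = []
def fake_model(p: str) -> str:
    calls.append(p)          # record each real "model call"
    return p.upper()

cached_completion("hello", fake_model)
cached_completion("hello", fake_model)   # second call served from cache
```

A production cache would also need an eviction policy and a TTL, since prompt or model updates invalidate stored responses.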

Prompt Chaining

A technique where multiple prompts are connected in sequence, with the output of one prompt serving as input to the next, enabling complex multi-step workflows that exceed the capability of a single prompt.

Prompt Engineering
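
A sketch of the pattern with a stand-in for the model call (with a real client, `call_model` would send each compiled prompt to an LLM):

```python
def chain(templates: list[str], call_model, initial_input: str) -> str:
    """Feed each step's output into the next step's prompt."""
    result = initial_input
    for template in templates:
        result = call_model(template.format(input=result))
    return result

steps = ["Summarize: {input}", "Translate to French: {input}"]
output = chain(steps, lambda p: f"<{p}>", "long article text")
```

Each step stays small and testable on its own, which is the main reason to chain rather than cram everything into one prompt.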

Prompt Collaboration

The practice of multiple stakeholders — prompt engineers, product managers, domain experts, and developers — working together on prompt development through shared tooling, review workflows, and role-based access.

LLM Ops

Prompt Compilation

The process of assembling structured prompt blocks — role, context, instructions, guardrails, output format — into a single prompt string, including ordering, formatting, and variable interpolation.

Architecture
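
A minimal sketch of the assembly step (the block names, ordering, and `{{...}}` syntax mirror this glossary's other entries but are not a fixed standard):

```python
BLOCK_ORDER = ["role", "context", "instructions", "guardrails", "output_format"]

def compile_prompt(blocks: dict[str, str], variables: dict[str, str]) -> str:
    """Assemble typed blocks in a fixed order, then interpolate variables."""
    parts = [blocks[name] for name in BLOCK_ORDER if name in blocks]
    text = "\n\n".join(parts)
    for key, value in variables.items():
        text = text.replace("{{" + key + "}}", value)
    return text

prompt = compile_prompt(
    {"role": "You are a support agent.", "instructions": "Help {{user_name}}."},
    {"user_name": "Ada"},
)
```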

Prompt Deployment

The process of promoting a tested and approved prompt version from a development or staging state to production, making it available to live applications through an API or delivery mechanism.

LLM Ops

Prompt Engineering

The discipline of designing, structuring, and iterating on instructions given to LLMs to elicit accurate, consistent, and useful outputs for specific use cases.

Prompt Engineering

Prompt Evaluation

The process of measuring prompt quality against defined criteria such as accuracy, relevance, safety, and format compliance, distinct from broader LLM evaluation by focusing specifically on how well the prompt elicits desired model behavior.

LLM Ops

Prompt Governance

The set of policies, controls, and processes that organizations implement to manage prompt changes at scale, ensuring consistency, compliance, and accountability across teams and applications.

Security

Prompt Injection

A security attack where malicious input is crafted to override or manipulate an LLM's system prompt, causing the model to ignore its instructions and perform unintended actions.

Security

Prompt Lifecycle

The complete set of stages a prompt goes through from initial authoring and iteration, through testing and review, to deployment in production, and ongoing monitoring and refinement.

LLM Ops

Prompt Management

The practice of organizing, versioning, testing, and deploying LLM prompts through a centralized platform rather than embedding them directly in application code.

LLM Ops

Prompt Optimization

The iterative process of refining prompts to maximize output quality, consistency, and efficiency, typically through systematic testing, evaluation, and data-driven adjustments.

Prompt Engineering

Prompt Registry

A centralized catalog of all prompts within an organization, providing a single source of truth for discovery, access control, and operational visibility across teams and applications.

LLM Ops

Prompt Rollback

The ability to revert a production prompt to a previously published version when issues are detected, providing a rapid recovery mechanism that does not require application code changes.

LLM Ops

Prompt Template

A reusable prompt structure containing variable placeholders (e.g., {{user_name}}, {{context}}) that are dynamically filled at runtime, enabling the same prompt to serve different inputs and scenarios.

Prompt Engineering
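
A sketch of one template serving two different scenarios (the `render` helper is an assumption; real platforms provide their own rendering):

```python
TEMPLATE = (
    "You are a helpful assistant for {{user_name}}.\n"
    "Use the following context:\n{{context}}"
)

def render(template: str, **values: str) -> str:
    """Fill each {{placeholder}} slot with a runtime value."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template

prompt_a = render(TEMPLATE, user_name="Ada", context="Order #123 shipped.")
prompt_b = render(TEMPLATE, user_name="Bob", context="Refund issued.")
```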

Prompt Testing

The systematic validation of prompt behavior before deployment, using test cases, automated assertions, and evaluation criteria to catch regressions and verify that prompts meet quality standards.

LLM Ops
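
A toy test runner in the assertion style the definition describes, shown here with a stub in place of a real model call:

```python
def run_prompt_tests(call_model, cases) -> list[str]:
    """Run a predicate check against the model output for each test prompt."""
    failures = []
    for prompt, check in cases:
        output = call_model(prompt)
        if not check(output):
            failures.append(prompt)
    return failures

cases = [
    ("Reply with valid JSON: {}", lambda out: out.strip().startswith("{")),
]
failures = run_prompt_tests(lambda p: "{}", cases)   # stub model always returns "{}"
```

Checks like "output is valid JSON" or "output never mentions competitors" are cheap, deterministic guards that catch regressions before deployment.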

Prompt Versioning

The practice of maintaining a complete history of changes to LLM prompts, enabling teams to compare versions, roll back to previous states, and manage environment-specific deployments.

LLM Ops
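
A sketch of an append-only history, in which rollback (see the Prompt Rollback entry) is simply re-publishing an earlier version (the `PromptHistory` class is illustrative, not a real platform API):

```python
class PromptHistory:
    """Append-only version history with rollback to any earlier version."""

    def __init__(self):
        self.versions: list[tuple[int, str]] = []

    def publish(self, text: str) -> int:
        version = len(self.versions) + 1
        self.versions.append((version, text))
        return version

    def current(self) -> str:
        return self.versions[-1][1]

    def rollback(self, version: int) -> str:
        """Re-publish an earlier version as the newest one."""
        _, text = self.versions[version - 1]
        self.publish(text)
        return text

history = PromptHistory()
history.publish("v1 text")
history.publish("v2 text")
history.rollback(1)   # v1's text becomes the live version again, as v3
```

Making rollback a new append, rather than a deletion, keeps the full audit trail intact.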

Retrieval-Augmented Generation (RAG)

An architecture pattern that enhances LLM responses by retrieving relevant documents from an external knowledge base and including them in the prompt as context before generating a response.

Architecture
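
A deliberately naive sketch of the pattern, with word-overlap ranking standing in for a real embedding-based retriever:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(query: str, documents: list[str]) -> str:
    """Build a prompt that grounds the answer in retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context above.")

docs = ["The return window is 30 days.",
        "Shipping is free over $50.",
        "We are closed Sundays."]
prompt = rag_prompt("what is the return window", docs)
```

The closing instruction ("only the context above") is what anchors the model to the retrieved documents rather than its pre-trained knowledge.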

Structured Prompts

Prompts organized into typed, labeled sections or blocks — such as role, context, instructions, guardrails, and output format — rather than written as a single continuous block of text.

Prompt Engineering

System Prompt

A privileged instruction, typically supplied at the start of a conversation, that defines the LLM's behavior, personality, constraints, and output format for the entire session, taking precedence over ordinary user messages.

Prompt Engineering

Token Optimization

Techniques for reducing the number of tokens consumed by prompts and responses while maintaining output quality, directly lowering costs and improving response latency in LLM applications.

Prompt Engineering

Variable Interpolation

The process of replacing placeholder tokens (such as {{variable_name}}) in a prompt template with actual runtime values, enabling dynamic prompts that adapt to different users, contexts, and inputs.

Prompt Engineering
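
A sketch using a regex so that missing variables fail loudly instead of leaving a stray `{{placeholder}}` in the prompt (the `interpolate` helper is an assumption):

```python
import re

PLACEHOLDER = re.compile(r"\{\{(\w+)\}\}")

def interpolate(template: str, values: dict[str, str]) -> str:
    """Replace each {{name}} with its value; raise on missing variables."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in values:
            raise KeyError(f"missing template variable: {name}")
        return values[name]
    return PLACEHOLDER.sub(substitute, template)

greeting = interpolate("Hello {{user_name}}", {"user_name": "Ada"})
```

Failing fast here matters: a literal `{{user_name}}` leaking into a production prompt is a silent quality bug.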

Zero-Shot Prompting

A prompting approach where the LLM is given only instructions and context without any input-output examples, relying entirely on the model's pre-trained knowledge to perform the task.

Prompt Engineering
