
Technical Details of an AI CMS


Sara Williams

AI is set to be a catalyst for redefining technology as we know it. For content management, it’s reshaping CMS architectures, making them more dynamic and better at delivering content-rich experiences efficiently and at scale.

With AI-enhanced content creation, real-time recommendations, and more, “AI-powered CMS platforms enable businesses to deliver highly targeted and adaptive customer journeys across digital channels.”

But what needs to be under the hood to make these AI CMSs work?

What Developers Care About

Many enterprise applications have embraced AI by placing a UI wrapper on top of ChatGPT, Gemini, Llama, or another LLM. While this is a step in the right direction, developers need an architectural approach for enabling generative AI (genAI) within content authoring workflows, and a way to organize their AI applications.

For content authors and other business stakeholders, the LLM needs to support them in handling tedious tasks and generating ideas. For developers, however, there is a risk of security and scalability issues without a solid architectural foundation.

The Inner Workings of an AI CMS

Here’s how an AI-powered CMS can deliver what you need. 

The Foundational AI CMS Architecture

AI-native capabilities must be embedded at every layer of the CMS, providing the foundation required to fully leverage artificial intelligence in content management. Bolting on loose integrations with LLMs and other standalone AI tools will not suffice.

As shown in the figure below, AI services should reside at each of a CMS's three distinct layers:

Figure: AI CMS Services

  • Data and Services: The base layer provides core AI capabilities, services, and data stores, such as access to LLMs (ChatGPT, Gemini, Llama, etc.), vector stores, stable diffusion services, speech-to-text and text-to-speech services, and vision and recognition systems. 
  • Orchestration Layer: Here, you’ll find frameworks for coordinating and orchestrating AI workloads. The layer has three primary objectives: abstracting interactions with the services and stores in the layer below, providing pluggable templates for common boilerplate usage patterns like the RAG pattern, and providing a place for our domain-specific actions. Common AI orchestration frameworks include SpringAI and Langchain, among many others. (A minimal wiring sketch follows this list.)
  • AI Apps Layer: This is where end users interact with AI assistants, AI agents and custom AI apps for both content authoring experiences and site/app visitor experiences. 
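
To make the layering concrete, here is a minimal sketch using Spring AI, one of the orchestration frameworks named above (API shapes assume Spring AI 1.x; the prompt text is illustrative). The data and services layer appears as an OpenAI chat model, the orchestration layer as a ChatClient, and the AI apps layer as a simple assistant interaction.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.api.OpenAiApi;

public class AssistantApp {
    public static void main(String[] args) {
        // Data and Services layer: a raw connection to the LLM (OpenAI here).
        OpenAiChatModel chatModel = OpenAiChatModel.builder()
                .openAiApi(OpenAiApi.builder()
                        .apiKey(System.getenv("OPENAI_API_KEY"))
                        .build())
                .build();

        // Orchestration layer: ChatClient abstracts the model and hosts
        // pluggable advisors (templates) and tools.
        ChatClient chatClient = ChatClient.builder(chatModel).build();

        // AI Apps layer: an assistant-style interaction for an author or visitor.
        String answer = chatClient.prompt()
                .user("Suggest three headline variants for our spring campaign page.")
                .call()
                .content();
        System.out.println(answer);
    }
}
```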

Improving AI Responses with RAG

To improve the quality of AI responses, companies and CMS users need to make them more specific to their needs and reduce the risk of a low-quality or incorrect response. Aside from fine-tuning and training, one approach is retrieval-augmented generation (RAG). Here’s how it works in an AI CMS.

  1. We pass the user's prompt through our orchestration layer, using OpenSearch as the vector store and OpenAI as the LLM.
  2. The orchestration layer uses an out-of-the-box template called the Retrieval Augmentation Advisor to automatically coordinate the entire sequence of events: it processes the prompt into embeddings and then retrieves similar documents from the vector store, which are used to enhance the prompt. (A code sketch follows this list.)
  3. The enhanced prompt and context are then passed to the LLM to be processed. Once the LLM has generated a response, it can be returned, or streamed to the end user as it’s being generated.
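
Here is what those steps might look like in code, sketched with Spring AI's RetrievalAugmentationAdvisor (its implementation of the template named in step 2). The vectorStore parameter is assumed to be Spring AI's OpenSearch-backed VectorStore; the topK and similarityThreshold values are illustrative.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.rag.advisor.RetrievalAugmentationAdvisor;
import org.springframework.ai.rag.retrieval.search.VectorStoreDocumentRetriever;
import org.springframework.ai.vectorstore.VectorStore;

class RagService {
    String answerWithRag(ChatClient chatClient, VectorStore vectorStore, String userPrompt) {
        // Step 2: the advisor embeds the prompt and retrieves similar
        // documents from the vector store to enhance the prompt.
        RetrievalAugmentationAdvisor ragAdvisor = RetrievalAugmentationAdvisor.builder()
                .documentRetriever(VectorStoreDocumentRetriever.builder()
                        .vectorStore(vectorStore)   // OpenSearch-backed store
                        .topK(5)                    // how many similar documents to fetch
                        .similarityThreshold(0.7)   // drop weak matches
                        .build())
                .build();

        // Step 3: the enhanced prompt plus retrieved context goes to the LLM.
        return chatClient.prompt()
                .advisors(ragAdvisor)
                .user(userPrompt)
                .call()   // or .stream() to send tokens as they are generated
                .content();
    }
}
```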

Figure: AI CMS RAG Step 1

The content delivery side assumes that the vector store, in this case OpenSearch, is already populated with relevant content. The embeddings for that content are generated during authoring and publishing activities; this happens out of band from delivery.

Figure: AI CMS RAG Deployment Processor

The Deployer plays this role in the orchestration layer via a RAG deployment processor.

  1. This processor determines which content the user has changed and published, and prepares that content as simple documents suitable for processing into embeddings. 
  2. The deployment processor then uses the LLM, OpenAI in this example, to generate embeddings for the documents.
  3. The deployment processor then takes those embeddings and updates the vector store with new and updated vectors, or removes vectors related to deleted content. (A code sketch follows this list.)
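
Sketched below with Spring AI, under the assumption that the Deployer hands the processor the changed items and deleted IDs (the onPublish() hook and PublishedItem record are hypothetical stand-ins): VectorStore.add() invokes the configured embedding model for each document and writes the resulting vectors to OpenSearch, while delete() removes vectors for retired content.

```java
import java.util.List;
import java.util.Map;
import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;

class RagDeploymentProcessor {
    // Hypothetical shape of a published item handed over by the Deployer.
    record PublishedItem(String id, String path, String plainText) {}

    private final VectorStore vectorStore; // OpenSearch-backed

    RagDeploymentProcessor(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    // Hypothetical hook invoked after a publish event.
    void onPublish(List<PublishedItem> changed, List<String> deletedIds) {
        // Step 1: prepare changed content as simple documents.
        List<Document> docs = changed.stream()
                .map(item -> new Document(item.id(), item.plainText(),
                        Map.of("path", item.path())))
                .toList();

        // Steps 2 and 3: add() generates embeddings for each document and
        // writes the new and updated vectors to the vector store.
        vectorStore.add(docs);

        // Remove vectors related to deleted content.
        vectorStore.delete(deletedIds);
    }
}
```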

Vector stores are only updated after content is approved and published in the live environment.

Tools and Function Calling

If we send the LLM a prompt that requests an action, for example, booking a hotel room, even with RAG the LLM will not know how to complete that booking. The best it can do is suggest the steps for booking the room manually, or perhaps provide some code that we could execute in the context of an application to place the booking.

Function callbacks give an AI agent the ability to take actions using tools. For example, you can issue the command “Book me a room.” Instead of responding with instructions on booking a room, the LLM will do it for you by using function callbacks.

Function callbacks allow us to register a capability and describe to the LLM what it does, what input it requires, and what it returns. Once we’ve described all this, we can pass the function callback and its metadata along with the prompt that we send to the LLM.
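
Here is a hedged sketch of that registration using Spring AI's @Tool annotation (assuming Spring AI 1.x); the booking tool, its parameters, and the back-end call it stands in for are hypothetical.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;

class BookingTools {
    // The annotation metadata describes the capability to the LLM:
    // what it does, what input it requires, and what it returns.
    @Tool(description = "Books a hotel room and returns a confirmation number")
    String bookRoom(@ToolParam(description = "Check-in date, ISO-8601") String checkIn,
                    @ToolParam(description = "Number of nights") int nights) {
        // A real implementation would call the back-end booking API here.
        return "CONF-12345";
    }
}

class BookingAssistant {
    // Passing the tool object sends its metadata along with the prompt;
    // if the LLM decides to act, the framework invokes bookRoom() and
    // feeds the result back to the model to formulate the final answer.
    String book(ChatClient chatClient) {
        return chatClient.prompt()
                .tools(new BookingTools())
                .user("Book me a room for two nights starting March 3rd")
                .call()
                .content();
    }
}
```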

Figure: AI CMS Tools and Function Calling

  1. The LLM determines whether we are asking it to take an action and whether it has an appropriate tool for that action. If it has such a tool, it confirms or acquires the information it needs in order to invoke the action.
  2. The LLM then calls back to the orchestration layer and invokes the function callback.
  3. The orchestration layer invokes an underlying API in one of our back-end systems, acquires the result from that API, and forwards it to the LLM.
  4. The LLM leverages that result to formulate a response and answers the user.

On the authoring side, we can then perform tasks such as drafting content, creating/updating content models and UI templates, publishing or reverting content, and any other content authoring activity.

Building AI Agents

Agents are AI apps that can act autonomously to fulfill a complex goal that may require multiple steps that need to be planned and executed. 

Function callbacks are necessary for agents to have agency. However, in addition to these kinds of tools, agents, unlike AI assistants, need additional context. Our orchestration layer, along with the LLMs and other services and stores in the layer below, needs to support long-lived, AI-powered background processes that define the lifespan of an agent and its tasks.

Figure: AI CMS Agent Architecture

In this context, AI agents can act autonomously on behalf of users, guided by their instructions in the form of objectives, task parameters, and guardrails that define the desired outcome and scope of the tasks. This gives them the ability to carry out complex tasks such as moderating comments, writing and scheduling content, or identifying and updating outdated content on the authoring side. On the delivery side, the task might be to help facilitate and guide an end user through a complex workflow or another aspect of their customer journey.
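
As a minimal sketch of such a long-lived process, assuming Spring AI for the orchestration and a plain scheduled executor for the agent's background lifespan (the moderation objective, guardrails, and tools below are hypothetical examples):

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;

// Hypothetical tools that would be backed by the CMS's comment APIs.
class ModerationTools {
    @Tool(description = "Returns the IDs and text of comments awaiting review")
    List<String> listNewComments() { return List.of(); /* call CMS API */ }

    @Tool(description = "Hides a comment from the live site")
    void hideComment(@ToolParam(description = "Comment ID") String id) { /* call CMS API */ }

    @Tool(description = "Flags a comment for human review")
    void flagForReview(@ToolParam(description = "Comment ID") String id) { /* call CMS API */ }
}

class CommentModerationAgent {
    private final ChatClient chatClient;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    CommentModerationAgent(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    // The scheduler gives the agent its long-lived background lifespan.
    void start() {
        scheduler.scheduleAtFixedRate(this::runOnce, 0, 5, TimeUnit.MINUTES);
    }

    private void runOnce() {
        chatClient.prompt()
                // The objective and guardrails define the outcome and scope.
                .system("""
                        Objective: review new site comments and hide spam or abuse.
                        Guardrails: never delete comments; flag borderline cases
                        for a human moderator instead of acting on them.
                        """)
                .tools(new ModerationTools())
                .user("Process any comments submitted since the last run.")
                .call()
                .content();
    }
}
```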

CrafterCMS: A Foundation for AI-Powered Content Management and Digital Experiences

CrafterCMS offers native support at the orchestration layer by bundling general frameworks like SpringAI. This provides flexibility at the data layer, along with pre-built templates for AI-driven tasks, enhanced with specialized capabilities and SDKs tailored specifically for CrafterCMS and content management.

These integrations extend into the application layer, offering SDKs for common user interfaces and various interaction modes. Enterprises can leverage these tools to streamline the creation of authoring plugins or to enhance visitor experiences across their sites and apps on the content delivery front.

At the data and services tier, CrafterCMS remains technology-agnostic while enabling generative AI use cases through out-of-the-box integrations with leading LLMs like ChatGPT. CrafterCMS also integrates OpenSearch within its content authoring and delivery environments, offering robust search indexing capabilities and built-in support as a vector database for managing embeddings in AI-powered workflows and RAG pipelines.

Register for a Free Trial of CrafterCMS today to see more of these AI capabilities in action.
