Practical applicability is paramount in every feature we develop within our software. That’s why we sometimes choose a contrarian approach and don’t always follow the latest trends. The same goes for features we develop around AI (artificial intelligence).
We always apply the following principles for our development process:
- A feature must add value for the customer.
- The privacy of customers and their clients must be safeguarded.
- It must comply with EU regulations from day one.
And we try to build in as much flexibility as possible, so that each feature fits the customer's way of working as closely as possible.
We use the term AI because customers specifically ask for AI. But we prefer to talk about the functionality it enables, such as processing emails in the background or translating a text. That’s ultimately what matters to the customer.
The first principle, adding value for customers, can create tension when it comes to AI. On the one hand, AI helps perform tasks; on the other, as a customer you want to retain control over the process and ensure that employees understand what is happening.
The risk of knowledge loss is significant when employees rely blindly on AI, as a recent MIT study demonstrated. And when employees lose knowledge of a process, the feature never delivers its full value.
That’s why we focus on features where AI acts as an assistant or takes over tasks that humans generally carry out on autopilot.
If a customer sends an email stating they will pay tomorrow, we want MA!N to process that email in the background. An employee adds little value there.
When it’s time for an important decision, MA!N prepares an action for the employee so that a human can weigh the options. And MA!N includes a built-in sampling function to audit the AI.
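In spirit, a sampling audit like the one described here draws a random fraction of automatically processed items and routes them to a human reviewer. The sketch below is illustrative only, not MA!N's actual implementation; the item structure and the 5% rate are assumptions:

```python
import random

def sample_for_audit(processed_items, rate=0.05, seed=None):
    """Select a random fraction of AI-processed items for human review.

    Illustrative sketch: the item structure and default rate are assumed,
    not taken from MA!N.
    """
    rng = random.Random(seed)
    # Always audit at least one item when there is anything to audit.
    k = max(1, round(len(processed_items) * rate)) if processed_items else 0
    return rng.sample(processed_items, k)

# Example: audit 5% of 100 automatically processed emails.
items = [{"id": i, "action": "payment_promise_logged"} for i in range(100)]
audit_queue = sample_for_audit(items, rate=0.05, seed=42)
print(len(audit_queue))  # 5 items routed to a human reviewer
```

A fixed seed makes the sample reproducible for a given audit run; in practice you would log which items were sampled and the reviewer's verdict.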
That control goes beyond the actions MA!N performs. Customers also want to maintain control over data and privacy.
We achieve control by managing the underlying technology. We work with local AI models within our own cloud instead of using services like ChatGPT.
What does this mean?
At CE-iT we choose open-source models that we install within our own cloud, so that customers themselves decide what happens with their data.
We use models like Microsoft’s Phi, Google’s Gemma, or Hermes from Nous Research, and these are available in multiple variants. Some are small and efficient, others large and powerful. The more powerful the model, the more expensive it is to use.
For each task, MA!N selects the most suitable model, because not every task requires the most powerful one, just as a truck isn't the best vehicle for picking up a few groceries.
This approach results in lower costs, reduced environmental impact, and even faster performance.
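The per-task routing described above can be sketched as a simple lookup that maps each task to the lightest model that handles it well. The task names, model variants, and mapping below are illustrative assumptions, not MA!N's actual configuration (the model families are the ones named in the text):

```python
# Illustrative per-task model routing. The task names and the specific
# model variants/sizes here are assumptions for the sake of the example.
MODEL_ROUTES = {
    "classify_email": "phi-3-mini",    # small and efficient: simple classification
    "translate_text": "gemma-2-9b",    # mid-size: good translation quality
    "draft_response": "hermes-3-70b",  # large and powerful: open-ended generation
}

DEFAULT_MODEL = "phi-3-mini"

def select_model(task: str) -> str:
    """Pick the lightest suitable model for a task, falling back to the default."""
    return MODEL_ROUTES.get(task, DEFAULT_MODEL)

print(select_model("classify_email"))  # phi-3-mini
print(select_model("unknown_task"))    # phi-3-mini (fallback)
```

Routing cheap tasks to small models is what keeps cost, energy use, and latency down: the large model is only invoked when the task actually needs it.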
In short: we develop AI agents for specific tasks with the flexibility for customers to add their own rules. That’s AI in MA!N: flexible and private.