AIaaS can help you jumpstart your AI projects, but there are some details you need to know. Here’s what to look for and five leading options to consider.


AI as a service (AIaaS) explained

Artificial intelligence (AI) is not a flash in the pan; it is here to stay. Gartner predicts that more than 80% of enterprises will have used some form of generative AI APIs or applications by 2026. If you plan to be among that 80%, you have to determine the best way to train and deploy AI: on premises or in the cloud.

AI training requires specialized hardware that is far more expensive than standard server equipment: pricing starts in the mid-six figures and can run into the several-million-dollar range. And that hardware cannot easily be repurposed for other uses such as databases.


In this buyer’s guide

  • How AIaaS works
  • Is AIaaS right for your organization?
  • Key criteria for choosing an AIaaS platform
  • Leading AIaaS vendors

In addition to purchasing and maintaining the AI hardware, there is the model on which your AI application is based. Training is the hardest and most compute-intensive part of AI; it can take weeks or even months, depending on the size of the data set. That could be months you don’t have.

So you have two options: acquire the hardware and do it yourself, or turn to an AI-as-a-service provider. AIaaS is the latest entrant in the as-a-service market, oriented specifically around AI initiatives. It is typically provided by the major cloud service providers, but smaller vendors are entering the market as well. Many AIaaS providers offer not only hardware for lease but also prebuilt models, which can shave months off the time to deploy.

Why enterprises need AI as a service (AIaaS)

AI was already on the radar for many companies, but when ChatGPT and generative AI (genAI) exploded onto the scene in late 2022, businesses felt a greater sense of urgency to adopt it, says Mike Gualtieri, a principal analyst at Forrester Research.

“With generative AI, they can just use a model that’s prebuilt, and that’s largely what they’re doing. So they don’t need to buy their own infrastructure. Many of them are thinking they’re going to fine-tune an open-source model,” he says.

AIaaS provides customers with cloud-based access for integrating and using AI capabilities in their projects or applications without needing to build and maintain their own AI infrastructure. It also offers prebuilt and pretrained models for basic uses like chatbots so the customer doesn’t have to go through the process of training their own.

“Basically, AI-as-a-service enables you to accelerate your application engineering and delivery of AI technologies in your enterprise,” says Chirag Dekate, an analyst for AI infrastructures and supercomputing at Gartner.

The big selling point for AIaaS is the cost of the hardware needed to run AI operations on premises. Enterprises used to buying commodity x86 servers are in for sticker shock when it comes to AI hardware: Nvidia’s DGX H100 system starts at around $500,000, while the much larger DGX SuperPOD starts at $7 million. For most firms, especially small and midsize businesses, that is prohibitively expensive, and renting time via AIaaS is the only option.

Because of the significant expense of hosting AI models on-premises, AIaaS is well suited to experimentation, because you pay only for what you need, says Forrester’s Gualtieri. “So you don’t want to ramp up your own infrastructure for 30 use cases when maybe [only] a few of them are going to work. But you do need the infrastructure to test it. So it’s a very good experimental playground — very economical for enterprises,” he says.

Besides the cost of the hardware, there’s also the issue of availability. Nvidia is dealing with a tremendous backlog of orders; if you are trying to acquire an on-premises SuperPOD today, you will have to wait for three to eight months. “A baby is going to be born sooner than you can deploy your infrastructure on premises,” says Gartner’s Dekate. “That is the kind of supply-demand imbalance that we are seeing in the marketplace today.”

So even if you want to buy and have the money, your only option for fast deployment might be to rent from an AIaaS provider.

Here are some benefits of using AIaaS:

  • Much lower barrier to entry: AI hardware is significantly more expensive than standard server hardware, and AIaaS is likewise pricier than standard cloud services, but renting is still far cheaper than acquiring the hardware outright.
  • Faster time to market: Installing and configuring AI hardware on-premises is expensive and time-consuming — if you even have the talent needed to deploy and support it. Leaving the management of the hardware up to a cloud service provider will save you a great deal of time and free you to focus on your business and core competency.
  • Access to cutting-edge technology: Getting on board with AI can be a competitive advantage, and it’s in AIaaS providers’ best interests to keep improving and innovating. Using their service lets you stay ahead of the curve.
  • Scalability: This is the cloud, where scalability is a primary selling point. You can scale the AI services up or down to suit your needs.
  • Access to AI expertise: AI is still in its early stages, and the pool of IT professionals who can configure and manage the hardware is limited. Most of the people who can operate AI hardware already work for the cloud service providers.

However, AIaaS also has drawbacks:

  • Vendor lock-in: Switching providers might be difficult once you’re using a provider’s platform.
  • Limited customization: Prebuilt models are ideal for general-purpose use but may not fit highly specific needs. You may be forced to build and train your own models rather than use a prebuilt one.
  • Security and privacy concerns: Sharing your data with a third-party provider requires careful consideration.

Doing the processing in the cloud relieves you of the burden and obligation of AI hardware in your data center. Your greatest concern will be your data and where it resides. If the data you are building your models from is already stored on Amazon Web Services, Google Cloud, or Microsoft Azure, then little data movement is required.

But if you are building your models from your own data, you will need a big, fast pipe to move all of your data from the data center into the cloud. So networking will be the main hardware area of concern.
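To see why the size of the pipe matters, here is a back-of-the-envelope sketch of bulk-upload time. The 100 TB data set, 10 Gbps link, and 80% link-efficiency figure are all hypothetical assumptions for illustration, not figures from any provider:

```python
def transfer_time_hours(data_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours to move data_tb terabytes over a link_gbps link.

    efficiency discounts protocol overhead and contention (an assumed figure).
    """
    bits = data_tb * 8 * 10**12            # decimal terabytes -> bits
    effective_bps = link_gbps * 10**9 * efficiency
    return bits / effective_bps / 3600

# Example: 100 TB of training data over a dedicated 10 Gbps link
print(f"{transfer_time_hours(100, 10):.1f} hours")  # prints "27.8 hours"
```

Even on a dedicated 10 Gbps link, a 100 TB data set takes more than a day to move, which is why network capacity belongs on the evaluation checklist alongside compute.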

What to look for in AI as a service (AIaaS)

AIaaS offers three entry points: the application level, the model engineering level, and the custom model development level, Dekate says. If you’re a relatively low-maturity enterprise and want to get started in genAI, you can use it at the application layer. Or if you want to manage your own models, you can work deeper down the stack.

AIaaS providers offer data preparation (because customer data is often unstructured), training of models the customer provides, and the option to use prebuilt AI models. These prebuilt models, trained on massive data sets, can perform tasks like image recognition, data analysis, natural language processing, and predictive analytics.

You access the services through APIs or user interfaces. This allows you to easily integrate the AI functionality into your own applications or platforms, often with minimal programming required.
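As an illustration of what API-level integration typically looks like, here is a minimal Python sketch. The endpoint URL, auth header, and payload fields are entirely hypothetical and stand in for whatever your chosen provider documents:

```python
import json
from urllib import request

def build_sentiment_request(api_key: str, text: str) -> request.Request:
    """Assemble an HTTP request for a hypothetical text-sentiment endpoint."""
    payload = {"document": text, "features": ["sentiment"]}
    return request.Request(
        "https://api.example-aiaas.com/v1/analyze",   # placeholder endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",      # placeholder auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_sentiment_request("YOUR_KEY", "AIaaS made our launch painless.")
# response = request.urlopen(req)  # requires a real endpoint and credentials
```

The pattern is the same across providers: authenticate, POST a JSON payload, and parse a JSON response, which is why AI functionality can often be wired into an existing application with minimal code.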

Most AIaaS providers charge on a consumption basis, either through metered pay-as-you-go use or a flat monthly rate. Either way, it is much pricier than a traditional IaaS/PaaS scenario: Nvidia, for example, charges a flat rate of $37,000 per month for its DGX Cloud service.
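Using the figures above, a quick break-even check shows why renting suits experimentation. This ignores power, cooling, and staffing, all of which tilt the math further toward renting, and treats the ~$500,000 system price as an approximation:

```python
monthly_rent = 37_000       # Nvidia DGX Cloud flat monthly rate
purchase_price = 500_000    # approximate cost of a comparable on-prem DGX system

breakeven_months = purchase_price / monthly_rent
print(f"Renting costs less for roughly {breakeven_months:.0f} months")
```

In other words, rented capacity is cheaper than buying for about the first year, long enough to find out which of your use cases actually work.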

The key criteria for choosing an AIaaS platform are as follows.

Workloads supported: According to Forrester’s Gualtieri, the No. 1 criterion in choosing an AIaaS provider is whether it supports the workloads for all three steps in AI: data prep, model training, and inferencing. Data prep is often overlooked in the AI discussion, but it has to be done, because the data AI taps into is often unstructured and sits unprocessed in data lakes.

Regional infrastructure: Gartner’s Dekate says the customer’s top priority should be whether the provider has sufficient scale capacity in their region and domain of execution. Many enterprises are global organizations, and not all cloud providers have resources distributed globally.

Match your needs with proven expertise: Search for vendors with experience in your specific industry or with projects addressing similar challenges. Ask for case studies, customer references, and testimonials showcasing their accomplishments.

Specify the type of AI you want to deploy: Image recognition is different from intrusion detection, which is different from a chatbot. An AIaaS provider may not specialize in every form of AI, so make sure that their specialization meets your needs.

Data and compliance compatibility: Make sure the vendor’s platform supports your data format and volume. If your data is of a highly regulated sort, make sure that the provider is certified to handle it.

Scalability: AIaaS providers may not have capacity to spare if your demand continues to grow. While the future is hard to guarantee, especially in a fast-growing industry like AI, it is advisable to secure some form of capacity commitment if at all possible.

Model updates and maintenance: AI models are rarely trained once and then left alone; they require regular and routine updating. Clarify the provider’s policy on storing the model, updating it, and the possibility of taking the model back on-premises and out of their system.

Workload management software: Finally, consider the provider’s software to manage workloads, particularly making sure that the provider can restart a workload if there is a problem during processing. “Imagine if you’re building an LLM [large language model] and you run it for a week and then something goes wrong,” says Gualtieri. “If it’s a multiweek workload, you don’t want to start over. So do they have things like checkpointing so that you can restart workloads?”
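Checkpointing, in this sense, simply means persisting training state periodically so a failed multiweek job can resume from the last save instead of from scratch. A generic sketch of the idea in Python, illustrative only and not any provider's actual mechanism:

```python
import json
import os

CKPT = "train_ckpt.json"

def load_checkpoint() -> dict:
    """Resume from the last saved step, or start fresh if no checkpoint exists."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"step": 0, "loss": None}

def save_checkpoint(state: dict) -> None:
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:          # write-then-rename so a crash mid-save
        json.dump(state, f)            # cannot corrupt the existing checkpoint
    os.replace(tmp, CKPT)

state = load_checkpoint()
for step in range(state["step"], 10):   # stand-in for a long training loop
    state = {"step": step + 1, "loss": 1.0 / (step + 1)}
    if state["step"] % 5 == 0:          # checkpoint every 5 steps
        save_checkpoint(state)
```

If the process dies at step 7, rerunning it resumes from the step-5 checkpoint rather than step 0; real training stacks apply the same pattern to model weights and optimizer state.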

Leading vendors for AI as a service (AIaaS)

AIaaS is no small effort. While newer providers like Nvidia and OpenAI, and even some managed service providers, are getting in on the act, the major providers in AIaaS today are the giant cloud service providers, because they’re the companies with the financial wherewithal to support AIaaS at enterprise scale. Their offerings include the following.

AWS AI: Amazon Web Services has a broad array of AI services, starting with prebuilt, ready-to-use services that can help jumpstart an AI project and get around the need for experienced data scientists and AI developers. These services include Amazon Translate (real-time language translation), Amazon Rekognition (image and video analysis), Amazon Polly (text-to-speech), and Amazon Transcribe (speech-to-text).

Managed infrastructure tools include Amazon SageMaker for building, training, and deploying machine learning models; Amazon Machine Learning (AML), with drag-and-drop tools and templates that simplify building and deploying models; Amazon Comprehend for natural language processing; Amazon Forecast for time series forecasting; and Amazon Personalize for personalized product and content suggestions.

Under generative AI, AWS offers Amazon Lex for building conversational AI bots, Amazon CodeGuru for code analysis and recommendations that improve code quality and security, and Amazon Kendra, an intelligent search solution.

Google Cloud AI: Google’s AI service specializes in data analytics, with tools like BigQuery and AI Platform, and offers the AutoML service, which automates model building for users with less coding experience.

Google Cloud AI offers a unified platform called Vertex AI to streamline the AI workflow, simplifying development and deployment. It also offers a wide range of services with prebuilt solutions, custom model training, and generative AI tools.

The AI Workbench is a collaborative environment for data scientists and developers to work on AI projects. AutoML automates much of the machine learning workflow, and MLOps manages the machine learning life cycle, ensuring that models are developed, tested, and deployed in a consistent and reliable way.

Google Cloud AI has several prebuilt AI solutions:

  • Dialogflow: a conversational AI platform for building chatbots and virtual assistants.
  • Natural Language API: analyzes text for sentiment, entity extraction, and other tasks.
  • Vision AI: processes images and videos for object detection, scene understanding, and more.
  • Translation API: provides machine translation across various languages.
  • Speech-to-Text and Text-to-Speech: convert between spoken language and text.

For generative AI, Vertex AI Search and Conversation is a suite of tools for building generative AI applications like search engines and chatbots, backed by more than 130 pretrained foundation models, such as PaLM and Imagen, for advanced text generation and image creation.

Google has announced its new Gemini model, the successor to its Bard chatbot. Gemini reportedly is capable of far more complex math and science generation, as well as generating advanced code in different programming languages. It comes in three versions: Gemini Nano for smartphones; Gemini Pro, accessible on PCs and running in Google data centers; and Gemini Ultra, still in development but said to be a much higher-end version of Pro.

IBM Watsonx: IBM Watson AI-as-a-service, now known as Watsonx, is a comprehensive array of AI tools and services known for its emphasis on automating complex business processes and its industry-specific solutions, particularly in healthcare and finance.

Watsonx.ai Studio is the core of the platform, where you can train, validate, tune, and deploy AI models, including both machine learning models and generative AI models. The Data Lakehouse is a secure and scalable storage system for all your data, both structured and unstructured.

The AI Toolkit is a collection of prebuilt tools and connectors that make it easy to integrate AI into your existing workflows. These tools can automate tasks, extract insights from data, and build intelligent applications.

Watsonx also includes multiple pretrained AI models that you can use immediately and without any training required. These models cover tasks such as natural language processing, computer vision, and speech recognition.

Microsoft Azure AI: Microsoft’s AI services are geared towards developers and data scientists and are built on Microsoft applications such as SQL Server, Office, and Dynamics. Microsoft has integrated AI across its various business applications in the cloud and on-premises.

Microsoft is also a dealmaker, securing partnerships and collaborating with leading AI companies including ChatGPT creator OpenAI. Many AI apps are available in the Azure Marketplace.

The vendor offers multiple prebuilt AI services, such as speech recognition, text analysis, translation, vision processing, and machine learning model deployment. It also has the Azure OpenAI Service, with pretrained models like Codex, DALL-E 2, and GPT-3.5.

Oracle Cloud Infrastructure (OCI) AI Services: Oracle has so far trailed the market leaders in cloud services, but it has several advantages worth considering. Oracle is, after all, a major enterprise software company, from its database to its business applications, and all of its on-premises applications can be extended to the cloud for a hybrid setup. That makes moving your on-premises data to the cloud for data prep and training straightforward, a similar advantage to the one Microsoft has with its legacy on-premises applications.

Oracle has invested heavily in GPU technology, currently the primary means of data processing in AI. So if you want to run AI apps on Nvidia technology, Oracle has you covered. And Oracle’s AI services are the most affordable among the cloud service providers, an important consideration given how expensive AIaaS can get.

OCI AI Services presents a diverse portfolio of tools and services to empower businesses with various AI functionalities. Like IBM’s Watsonx, it’s not just one service but a collection of capabilities catering to different needs, including fraud detection and prevention, speech recognition, language analysis, and document understanding.

Oracle’s generative AI Service supports LLMs like Cohere and Meta’s Llama 2, enabling tasks like writing assistance, text summarization, chatbots, and code generation. Oracle Digital Assistant’s prebuilt chatbot frameworks allow for rapid development and deployment of voice- and text-based conversational interfaces.

Machine Learning Services offers tools for data scientists to collaboratively build, train, deploy, and manage custom machine learning models, with support for popular open-source frameworks like TensorFlow and PyTorch.

Finally, there is OCI Data Science, which provides virtual machines with preconfigured environments for data science tasks, including Jupyter notebooks and access to popular libraries, simplifying your data exploration and model development workflow.
