

When DataRobot's management team looks to the future, they see a world where AI is part of every enterprise business decision. Where the computer is not acting with full autonomy, it is augmenting the human intelligence behind the decision. Today, DataRobot launches its DataRobot AI Cloud platform, built to run everywhere and handle a diverse collection of roles in classification, exploration, and decision making.

The new release expands the options for creating pipelines that turn incoming data into business decisions. It runs on premises or in all of the major clouds. Companies that want the SaaS option can pay per call.

Version 7.2 adds features to a platform known as a good, low-code way to experiment with artificial intelligence. Data goes in, and out comes a model that can be deployed as a service in the enterprise stack.

DataRobot also broadens the options by expanding Pathfinder, a collection of new, pre-built routines that simplify many common use cases. Predicting loan defaults, choosing how much stock to order for a store branch, or flagging insurance fraud are all set up in advance and ready to run with just a few customizations.

The new version also provides more options for model creation and deployment. At the same time, it can monitor decisions for potential bias and, perhaps, correct it.

To understand a little more about this new release, we sat down with Nenshad Bardoliwala, DataRobot's senior vice president of product, who is responsible for the new launch.

VentureBeat: Let's start with the big picture. You're unveiling a large umbrella, DataRobot AI Cloud, a platform that combines experimentation with day-to-day deployment on the front lines.

Nenshad Bardoliwala: We like to talk about the fact that most of the AI projects people do today are what we call experimental AI. They pull some data sets, they run a few experiments, but they never operationalize the model into a product or make it part of their business process. Many failed projects follow AI investment. We believe that literally every opportunity in business can be an AI opportunity, and that basically everyone in business can have the power of a data scientist, if you build software that democratizes these capabilities.

So we are announcing AI Cloud. It is a single system for every organization to accelerate the delivery of AI into production. We are a company that was started in 2012, so we have spent almost a decade and more than 1.5 million engineering hours bringing this platform to market. We also have a very interesting differentiator: we are one of the few companies, if not the only one, that has helped other companies put many, many solutions into production. Many of DataRobot's hundreds of customers have active AI and ML initiatives. The main principle we find defining AI Cloud is that you need a single platform for a wide variety of different user types.

VentureBeat: So what does it mean for a user who wants to turn data into decisions?

Bardoliwala: The idea behind the single platform is that we want to make it as easy as possible for each component to flow from one end of the life cycle to the other. If you use our automated machine learning capabilities, you can deploy them into our MLOps capabilities with a single click. The life cycle is complete. However, if you choose to use a different solution to build your models, let's say an open source library, you can still manage and monitor those models with our MLOps. But you won't get the one-click experience of staying on the platform.

VentureBeat: When I think of DataRobot, I think of a low-code tool that offers plenty of hand-holding from the desktop. How is that changing or expanding?

Bardoliwala: Historically, you are absolutely right that our primary user has been the citizen data scientist in that low-code, graphical user experience. We have made significant investments, especially with this 7.2 launch within AI Cloud, toward people who really want the ability to use code. So with this release, we have three new capabilities that span the spectrum of ways coders can actually participate in the platform.

The first is cloud-hosted notebooks. We believe that the world is polyglot. So from a programming-language perspective, we have the ability to stitch together R code, Python code, SQL code, and Scala in a single notebook, each in different cells inside the notebook. Now you can use whichever language is best suited to the task at hand.

VentureBeat: So that's at the notebook level. Can you go deeper?

Bardoliwala: Yes! In our automated machine learning product, we've introduced a capability called Composable ML, which lets you go deeper. DataRobot's automation will generate a pipeline for you with feature preprocessing steps as well as a specific algorithm. Because, again, we want to mix the best of both humans and machines, we now allow you to take any block of that pipeline inside the platform and replace it with your own code.

So you might say, "Oh, I don't like the way DataRobot does one-hot encoding." You can click on that block and upload your own R or Python code to the system so that it replaces what DataRobot had already created.
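To make the idea concrete, here is a minimal sketch, written in plain Python rather than DataRobot's actual Composable ML task interface, of the kind of custom one-hot encoding block a user might substitute for the built-in step. The class name, the scikit-learn-style fit/transform convention, and the rare-category threshold are illustrative assumptions.

# Illustrative sketch only; this is not DataRobot's Composable ML task API.
# It shows the sort of custom one-hot encoding step a user might upload.
import pandas as pd

class CustomOneHotEncoder:
    """One-hot encode string columns, folding rare categories into '__other__'."""

    def __init__(self, min_frequency=0.01):
        self.min_frequency = min_frequency  # categories rarer than this are folded
        self.categories_ = {}

    def fit(self, X: pd.DataFrame):
        for col in X.select_dtypes(include="object").columns:
            freq = X[col].value_counts(normalize=True)
            self.categories_[col] = freq[freq >= self.min_frequency].index.tolist()
        return self

    def transform(self, X: pd.DataFrame) -> pd.DataFrame:
        X = X.copy()
        for col, cats in self.categories_.items():
            X[col] = X[col].where(X[col].isin(cats), other="__other__")
        return pd.get_dummies(X, columns=list(self.categories_))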

VentureBeat: And if you still want more control?

Bardoliwala: We are introducing the DataRobot Pipelines product, which allows you to set up complex prediction and training pipelines that, again, can span multiple languages, from SQL to Python, all together in a reproducible, high-fidelity pipeline environment. We've added it to our portfolio as part of the AI Cloud and 7.2 releases. It's a big, big investment for us.

VentureBeat: This is all during development. Tell me about your plans for handling deployment, and for creating a feedback loop so the AI can keep learning from the data it sees in production.

Bardoliwala: When a model is deployed, it is placed on production-class infrastructure with a web-service front end that lets you send input data to the model or deployment and have it return predictions, right? But what makes it really interesting is that models can actually get stale over time. Just because you train something today doesn't mean it stays valid; the world changes and the data changes with it.
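As a rough illustration of what he describes, sending input data to a deployed model's web-service front end and getting predictions back might look like the following. The endpoint URL, authorization header, and payload fields are hypothetical stand-ins, not DataRobot's actual prediction API.

# Hypothetical prediction request; the URL, token, and fields are placeholders.
import requests

rows = [{"loan_amount": 12000, "term_months": 36, "credit_score": 710}]
resp = requests.post(
    "https://example.com/deployments/1234/predict",  # hypothetical endpoint
    json=rows,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. a predicted default probability per row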

So what we have introduced is a really powerful capability we call Continuous AI, which is also part of this release. The idea is that once you deploy a model, you watch all of its different aspects: is the data drifting, is the accuracy changing, is the service latency changing? You can set a threshold at a certain point, say where the model starts making poor predictions. Then Continuous AI, and this again speaks to the integration of the platform, will actually go and kick off a new set of automated training routines so we can find better models built on the latest data, which can later be substituted into the MLOps deployment. So continuously improving the quality of your models across this life cycle, or at the very least maintaining a certain level of performance, is something that is very unique to DataRobot.
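A minimal sketch of the threshold logic Continuous AI describes, assuming a simple population-stability-index drift score and an accuracy floor; the function names and the limits are illustrative, not DataRobot's implementation.

import numpy as np

def population_stability_index(expected, actual, bins=10):
    # Rough drift score between training-time and production feature values.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def maybe_retrain(train_values, live_values, live_accuracy,
                  drift_limit=0.2, accuracy_floor=0.80):
    # Trigger retraining when drift or accuracy crosses the configured threshold.
    drifted = population_stability_index(train_values, live_values) > drift_limit
    degraded = live_accuracy < accuracy_floor
    if drifted or degraded:
        print("Threshold crossed: launch automated retraining and swap the deployment.")
    else:
        print("Model within thresholds: no action needed.")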

VentureBeat: This feedback can take many forms, right? I noticed that you are starting to talk about monitoring AI for bias.

Bardoliwala: So bias monitoring is really, really interesting. The capability we introduced last year lets you examine the system for potential bias when training a model. The data scientist end user can label the protected classes in their data set, for example ethnic group or gender. Then DataRobot will go ahead and say, "You know, based on the fairness metric you use, let's say it's proportionality, we've noticed that you are disproportionately favoring one ethnic group over another."
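For readers who want to see what a proportionality-style check looks like in practice, here is a minimal sketch; the column names, the favorable-outcome encoding, and the 0.8 ratio rule are assumptions for illustration, not DataRobot's bias and fairness feature.

import pandas as pd

def selection_rates(df: pd.DataFrame, protected_col: str, outcome_col: str):
    # Share of favorable predictions (outcome == 1) for each protected group.
    return df.groupby(protected_col)[outcome_col].mean()

def flag_disparity(df, protected_col="ethnic_group", outcome_col="approved",
                   min_ratio=0.8):
    rates = selection_rates(df, protected_col, outcome_col)
    worst_ratio = rates.min() / rates.max()
    if worst_ratio < min_ratio:
        print(f"Possible bias: rates {rates.to_dict()}, ratio {worst_ratio:.2f} < {min_ratio}")
    return rates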

VentureBeat: This becomes part of the business process, and users can take note of it and work on it, right?

Bardoliwala: Yes, and we want to be able to do this when customers are actually in production, when they are actively receiving new prediction requests. They want to know if the model is starting to produce biased results. So in this release, we have the ability to observe changes in model behavior in production, where the model begins to treat some populations unfairly.

The second is that once we detect those thresholds being exceeded, we can start sending alerts to the deployment owner: "Hey, your models are not behaving the way you want them to, based on your organization's policies and ethical rules."
