Salesforce launches Einstein Copilot AI assistant into beta

Salesforce Inc. today announced the beta availability of Einstein Copilot, a customizable, conversational generative artificial intelligence assistant for every Salesforce application that is also capable of executing actions on behalf of users.

Einstein Copilot integrates directly into Salesforce and understands the context of what users are doing, giving them the ability to “chat” with their corporate data. That allows them to ask questions in natural language and receive trustworthy answers grounded in company data drawn from their own cloud.

“One of the key differentiators at Salesforce is that we have a pretty keen idea on what people do in the enterprise, especially when it comes to customer relationship management, but also all the enterprise applications,” Jayesh Govindarajan, senior vice president of AI at Salesforce, told SiliconANGLE in an interview. “Copilot is a new front end for people to get work done through conversational means.”

The new copilot also comes with predefined “actions” that allow it to perform a variety of business tasks associated with the application it runs alongside, taking the burden off the user. These actions are configured out of the box, Govindarajan explained, and can trigger events such as committing sales events, closing tickets or completing single action items.

Sometimes users will want to perform multiple actions, or chains of actions. For example, a return might trigger the need to retrieve information about a user, look up the item being returned, get the return address, produce a shipping label, contact the shipping service, update the ticket and email the user instructions for completing the return. All of this is done by a reasoning engine that understands the context and breaks the request down into the sequence of actions needed.
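The return-processing example above can be sketched as a simple orchestration loop: a planner maps a high-level request to an ordered chain of actions, and each action enriches a shared context. This is a minimal illustration only; the action names and the keyword-based planner are invented stand-ins, not Salesforce's actual API, and a real reasoning engine would use an LLM to produce the plan.

```python
# Hypothetical sketch of chained action orchestration. All action names and
# data values are invented for illustration; they are not Salesforce APIs.
from typing import Callable

# Registry of predefined "actions" the copilot may invoke.
ACTIONS: dict[str, Callable[[dict], dict]] = {
    "lookup_customer":       lambda ctx: {**ctx, "customer": "cust-42"},
    "lookup_item":           lambda ctx: {**ctx, "item": "sku-7"},
    "get_return_address":    lambda ctx: {**ctx, "address": "123 Warehouse Rd"},
    "create_shipping_label": lambda ctx: {**ctx, "label": "LBL-001"},
    "update_ticket":         lambda ctx: {**ctx, "ticket_status": "awaiting return"},
    "email_customer":        lambda ctx: {**ctx, "emailed": True},
}

def plan(request: str) -> list[str]:
    """Stand-in for the reasoning engine: map a high-level request to an
    ordered chain of actions. A real system would use an LLM here."""
    if "return" in request.lower():
        return ["lookup_customer", "lookup_item", "get_return_address",
                "create_shipping_label", "update_ticket", "email_customer"]
    return []

def orchestrate(request: str) -> dict:
    """Run each planned action in order, threading the context through."""
    ctx: dict = {"request": request}
    for name in plan(request):
        ctx = ACTIONS[name](ctx)
    return ctx

result = orchestrate("Process a return for my last order")
```

The design point is the separation of concerns: the planner decides *which* actions in *what* order, while each action stays a small, independently testable unit.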

“One of the key things about copilot is being able to give it instructions that are higher order, which require the engine to orchestrate not one but multiple actions in a certain order,” said Govindarajan. “So, the copilot comes with, in addition to out-of-the-box actions, the ability to interpret the ask or the task, and then break it down based on the interpretation.”

This “reasoning” engine means that Einstein Copilot’s underlying AI model can take a question from anyone in an organization, compare it with company data and tasks and quickly come up with an action set to complete it. The ability to use basic natural language turns what would have been a multistep task across an application user interface into an easier workflow.

Customers are not limited to the predefined actions Salesforce provides. Govindarajan said that though the system is already quite powerful with out-of-the-box actions, given the context within applications and the reasoning engine’s ability to determine the best possible tasks, it gets even better when customers extend it. Salesforce is working with early design partners to make the system as extensible as possible, so that enterprise users can bring their own actions for the AI to interact with and orchestrate.

With Einstein’s great power, Govindarajan said, also comes an increased need for security and trust. As a result, Einstein is built with a trust and access layer that the AI passes through, which enforces privacy and security measures, including guardrails on its conversational capabilities.

“You have access to the trust layer, that is a known set of actions that are tied to you — you as a user, based on your access levels in the company,” said Govindarajan. “It’s based on what you know what actions you have access to, and what data you have access to.”
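The access check Govindarajan describes can be pictured as a gate in front of every action: before the copilot runs anything, the request is compared against the actions tied to the user's role. The role names and entitlements below are invented for illustration and do not reflect Salesforce's actual permission model.

```python
# Hypothetical sketch of a trust/access layer gating copilot actions by role.
# Roles and action names are invented examples, not Salesforce's model.

ROLE_ACTIONS: dict[str, set[str]] = {
    "sales_rep":     {"update_opportunity", "log_call"},
    "support_agent": {"update_ticket", "issue_refund"},
}

def allowed_actions(role: str) -> set[str]:
    """The known set of actions tied to a user, per their access level."""
    return ROLE_ACTIONS.get(role, set())

def authorize(role: str, action: str) -> bool:
    """Every requested action must pass this gate before it executes."""
    return action in allowed_actions(role)
```

Checking entitlements before execution, rather than trusting the model's output, keeps the decision deterministic even when the conversational layer itself is probabilistic.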

At the same time, it masks personally identifiable information, checks outputs for bias and toxicity, protects sensitive information from breaches and ensures that proprietary information doesn’t leak into the AI model, he said.

To reduce the possibility of hallucinations, a type of error in which a large language model confidently states something incorrect, Salesforce grounds all information that goes into Einstein Copilot strictly in enterprise data and limits it to the actions it is intended to perform. Govindarajan said that ensuring the AI model operates only within a strict set of actions and context greatly increases its accuracy.

Image: Salesforce


By Kyt Dotson
