AI development has become a big part of our lives. It started with statistics and was focused mostly on numbers: precision, recall, and so on. But at some point it became obvious that many AI models were missing something very important: responsibility. With the growing popularity of Generative AI, Content Safety has drawn everyone's attention: how do we protect both inputs and outputs?
What can be done beyond prompt engineering? Content Safety detects harmful user-generated and AI-generated content in applications and services, and can help you put guardrails around your Generative AI models.
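To give a flavor of what such guardrails look like, here is a deliberately simplified sketch of checking both the input prompt and the model output. Everything in it is hypothetical (the blocklist, the `check_text` helper, and the stand-in model); real services such as Azure AI Content Safety use trained classifiers with per-category severity scores, not keyword lists.

```python
# Toy illustration of input/output guardrails around a generative model.
# All names below are hypothetical; this is not a real Content Safety API.

BLOCKLIST = {"badword", "slur"}  # stand-in for harmful-content detection

def check_text(text: str) -> bool:
    """Return True if the text passes the (toy) safety check."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

def guarded_generate(prompt: str, model) -> str:
    # Guardrail on the input (the user's prompt)...
    if not check_text(prompt):
        return "Sorry, this request violates the content policy."
    output = model(prompt)
    # ...and on the output (the model's response).
    if not check_text(output):
        return "The generated response was withheld by the content filter."
    return output

# Stand-in for a real generative model.
fake_model = lambda prompt: f"Echo: {prompt}"

print(guarded_generate("hello world", fake_model))   # passes both checks
print(guarded_generate("badword here", fake_model))  # blocked at the input
```

The point of the sketch is the shape, not the check itself: the same wrapper pattern applies whether the check is a keyword list or a call to a hosted moderation service.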
Join this session to discuss the principles of Responsible AI and the tools that support AI developers. The session will cover AI and ML in general, along with Responsible AI and Content Safety principles and tools. At the end, Veronika will show a demo using some of the tools available in Azure AI.
You will learn:
About Responsible AI principles and tools
About Content Safety principles and tools
Best practices for applying those principles