30
- AI, In the past year, cyberattacks have surged in speed, scale, and complexity.
- Defenders, however, are now learning to leverage generative AI to keep themselves secure against adversaries.
- Microsoft and OpenAI are working together to constantly uncover new threats and modes of attack. They’re identifying everything from tricky prompt injections to clever misuse of large language models (LLMs).
Here are some core principles that Microsoft follows on how to navigate the risks of the digital era:
- Identification and action against malicious threat actors’ use: On identifying malicious actors using AI services, APIs, or systems, swift action is taken to disrupt their activities. This includes disabling accounts, terminating services, or restricting access to resources.
- Notification to Service Providers: When Microsoft identifies malicious actors using another service provider’s AI, AI APIs, services, or systems, they promptly notify the service provider and share relevant data. This enables the service provider to independently verify findings and take action in accordance with their own policies.
- Collaboration: Collaboration with stakeholders is constantly taking place to exchange information regarding the detected use of AI by threat actors, encouraging collective and effective responses across the ecosystem.
- Transparency: In line with responsible AI practices, efforts are made to inform the public and stakeholders about actions taken based on these principles, ensuring transparency.
As AI leaps forward, so do the threats it brings. It’s important to come together, share knowledge, and work collectively to secure a safer digital future for everyone involved.
For more such News Kindly Check blogoday.com