Patna: OpenAI has introduced two new security features in its popular AI chatbot ChatGPT to help prevent data leaks and protect users from online threats. The new tools, called Lockdown Mode and Elevated Risk Labels, are designed to make conversations with the AI safer, especially at a time when cyber fraud and data theft cases are increasing rapidly.
The move is seen as important in countries like India, where more people are using digital payments, Aadhaar-linked services and online banking every day. As the use of artificial intelligence grows in financial and government services, concerns about privacy and data security are also rising. Experts have warned that AI systems can sometimes be tricked into sharing sensitive information if not properly protected.
One major cyber threat that has emerged recently is known as “prompt injection”. In this type of attack, hackers hide harmful instructions inside a document or website. If a user asks an AI tool to read or analyse that content, the AI may unknowingly follow the hidden instructions and reveal confidential data. For example, a person might ask ChatGPT to summarise information from a suspicious website, not realising that the page contains secret commands trying to pull private data from connected systems.
To deal with this risk, OpenAI has launched Elevated Risk Labels. This feature gives a clear warning on the screen if ChatGPT is about to connect to an external website or third-party app that may expose more data. The alert appears before any action is taken, allowing users to decide whether they want to continue. This helps users stay informed and careful while using the AI.
The second feature, Lockdown Mode, works like a “safe mode” for sensitive conversations. When this mode is turned on, ChatGPT limits its connection to outside systems, third-party apps and live web tools. This reduces the chances of data leaks. The feature is expected to be especially useful for journalists, corporate professionals, government officials and anyone handling confidential information. With these new measures, OpenAI hopes to build greater trust and make AI tools safer for everyday use.






















