Interviews

AI and security concerns

by Mark Rowe

It’s hard to look anywhere online without being bombarded with all things AI, including the many surveys and reports detailing the huge productivity gains available from generative AI, says Kulbinder Dio, founder of bionicGPT.

Coupled with this, though, we see multiple reports detailing organisations’ fears about rolling these products out. The main issue is security: a loss of control over their data. Data residency rules and regulations such as GDPR make this even more difficult, because to get the best productivity gains out of generative AI you need to use your own data.

These security concerns are even more critical when it comes to defence and security organisations that have data of a highly secret nature. They can’t just upload this into a public service like ChatGPT. So how can companies get these benefits without the security issues?

Instead of taking your data to the AI system, why not bring the AI to your data? It is possible to run different large language models (the models that sit behind these generative AI systems) locally; your only restriction is the hardware available to you. Many projects let you do this on your own machine, but what if you want an enterprise-ready solution? bionicGPT is an on-premise generative AI system for the thousands of organisations that have banned the use of externally hosted services such as ChatGPT due to security concerns. It has all the security benefits of running generative AI locally, with the enterprise features required to roll this functionality out to your whole organisation.

What can you actually do with these systems?

Retrieval Augmented Generation (RAG) – Let’s say you have a highly secret project with a large number of documents. RAG is a way to use content from those documents to generate answers to your questions. The documents are loaded into a special database, known as a vector database, which can perform semantic searches. Information retrieved from it is then passed to the generative AI system along with your query, so that the model can generate a response grounded in your own material. This is great, but once again you face the question of who within your organisation should have access to this information. bionicGPT comes to the rescue here by providing a permissioning model that lets you decide who in the organisation should have access to which documents.
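The retrieval step above can be sketched in a few lines. This is a deliberately minimal illustration using a toy bag-of-words similarity in place of a real neural embedding model and vector database; the document texts are invented for the example and nothing here reflects bionicGPT’s actual implementation.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a term-frequency vector. A real RAG system would
    use a neural embedding model and a proper vector database."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The "vector database": documents stored alongside their embeddings.
documents = [
    "The Falcon project launch date is 14 March.",
    "Office parking permits are renewed every January.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    """Return the k documents most semantically similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Retrieved context is prepended to the prompt sent to the local LLM,
# so the answer is grounded in your own documents.
context = retrieve("When is the Falcon project launch?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: When is the launch?"
```

A permissioning layer like the one described would simply filter `index` down to the documents the requesting user is allowed to see before ranking.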

Specialist models – A huge number of open-source models now exist that have been trained on very specialist data sets, so you don’t have to be limited to a single model. You can set up different models for different parts of the organisation, or even different teams. There are now multiple models specialised in acting as programming copilots; IBM has just open-sourced its Granite programming model, trained on 116 different programming languages. There are also specialist ‘hacker’ models that you might make available to your pen testing team for red teaming, while the wider organisation gets a general model to help word marketing material or reply to emails, and your support desk gets access to all the company’s product information loaded into the RAG system to help answer questions more quickly. There are hundreds of use cases; once you start looking, more come out of the woodwork.
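Routing different teams to different locally hosted models can be as simple as a lookup table. The model names below are illustrative examples of the kinds of open-source models mentioned, not a bionicGPT configuration format.

```python
# Hypothetical team-to-model routing table. Model names are examples
# of open-source models; any real deployment would pick its own.
MODEL_ROUTES = {
    "engineering": "granite-code",        # programming copilot
    "pen-testing": "security-specialist", # red-team model
    "support": "general-instruct",        # general model + product-docs RAG
}
DEFAULT_MODEL = "general-instruct"

def model_for(team: str) -> str:
    """Pick the local model a given team's requests are routed to,
    falling back to the organisation-wide general model."""
    return MODEL_ROUTES.get(team, DEFAULT_MODEL)
```

Keeping the routing in one place also makes it easy for administrators to audit which teams can reach which models.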

Shadow IT – where services are banned but employees use them anyway, creating an even bigger security risk.

So how can you make sure your Chief Risk Officer and CISO don’t have a heart attack? AI observability. One of the other advantages of bionicGPT is that all communication between the models and users is logged, so your compliance teams can check who has asked for what and what responses they have received.
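The audit-logging idea can be sketched as a thin wrapper around the model call that records every prompt/response pair. This is a generic illustration, not bionicGPT’s logging mechanism; the in-memory buffer stands in for whatever file or database a real deployment would use.

```python
import json
import time
from io import StringIO

def audited(generate, log):
    """Wrap a text-generation function so that every prompt/response
    pair is appended to an audit log, one JSON object per line."""
    def wrapper(user, prompt):
        response = generate(prompt)
        log.write(json.dumps({
            "ts": time.time(),
            "user": user,
            "prompt": prompt,
            "response": response,
        }) + "\n")
        return response
    return wrapper

# Stand-in for a call to a locally hosted LLM.
fake_llm = lambda prompt: f"echo: {prompt}"

log = StringIO()  # a real deployment would write to a file or database
chat = audited(fake_llm, log)
chat("alice", "Summarise the incident report")

# Compliance teams can later replay the log to see who asked what.
records = [json.loads(line) for line in log.getvalue().splitlines()]
```

Because the wrapper sits between users and the model, nothing reaches the model without leaving an audit trail.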

Glossary

LLM – These are the models that sit behind generative AI systems. A large language model is a language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by learning statistical relationships from text documents during a computationally intensive self-supervised and semi-supervised training process. (Wikipedia)
Generative AI – Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.
RAG – Retrieval Augmented Generation: the process of optimizing the output of a large language model, so it references an authoritative knowledge base outside of its training data sources before generating a response.
Hallucination – when a large language model (LLM) generates a response that is factually incorrect, nonsensical, or disconnected from the input prompt.

Visit https://bionic-gpt.com/.
