profile

AI Tech Circle

Key Risks Associated with Generative AI

Published 20 days ago • 4 min read

AI Tech Circle

Hey Reader!

This week, I had a chance to dig deep into the report published by the German Federal Office for Information Security, "Generative AI Models - Opportunities and Risks for Industry and Authorities." This report has covered a few areas, such as the planning, development, and operation phases of generative AI models, where a systematic risk analysis should be conducted.

For those of us involved in organizational projects that employ Large language models, it's crucial to be aware of the potential risks associated with such projects. Fortunately, an excellent resource is available that outlines 28 risks associated with large language models.

By familiarizing ourselves with these risks, we can better plan and execute these projects with greater efficiency and safety.

The report has categorized the LLM risk into 3 areas:

  1. Risks in the context of proper use of LLMs
    • R2. Lack of Quality, Factuality, and Hallucinating
    • R3. Lack of Up-to-dateness
    • R4. Lack of Reproducibility and Explainability
    • R5. Lack of Security of Generated Code
    • R6. Incorrect Response to Specific Inputs
    • R7. Automation Bias
    • R8. Susceptibility to Interpreting Text as an Instruction
    • R9. Lack of Confidentiality of the Input Data
    • R10. Self-reinforcing Effects and Model Collapse
    • R11. Dependency on the Developer/ Operator of the Model
  2. Risks due to misuse of LLMs
    • R13. Social Engineering
    • R14. Re-identification of Individuals from Anonymised Data
    • R15. Knowledge Gathering and Processing in the Context of Cyberattacks
    • R16. Generation and Improvement of Malware
    • R17. Placement of Malware
    • R18. Remote Code Execution (RCE) Attacks
  3. Risks resulting from attacks on LLMs
    • R20. Embedding Inversion
    • R21. Model Theft
    • R22. Extraction of Communication Data and Stored Information
    • R23. Manipulation through Perturbation
    • R24. Manipulation through Prompt Injections
    • R25. Manipulation through Indirect Prompt Injections
    • R26. Training Data Poisoning
    • R27. Model Poisoning
    • R28. Evaluation Model Poisoning

Below is the representation of different Risks across the typical life cycle of an LLM project.


Weekly News & Updates...

This week's AI breakthroughs mark another leap forward in the tech revolution.

  1. OpenELM from Apple: open-source training and inference framework
  2. Phi-3 - SLM (Small language models) from Microsoft is available in two context-length variants, 4K and 128K tokens.
  3. Snowflake Arctic: Largne Language models under the Apache 2.0 license provide ungated access to weights and code.

The Cloud: the backbone of the AI revolution

  • Oracle U.S. Government Cloud Customers Accelerate Sovereign AI with NVIDIA AI Enterprise. availability of Nvidia AI Enterprise on OCI
  • PyTorch/XLA 2.3: Distributed training, dev improvements, and GPUs from Google; XLA is a specialized compiler designed to optimize linear algebra computations for the foundation of deep learning models.
  • NVIDIA to acquire GPU Orchestration Software Provider Run:ai, a Kubernetes-based workload management and orchestration software.

Favorite Tip Of The Week:

Here's my favorite resource of the week.

  • Cohere Toolkit: This collection of prebuilt components enables users to build and deploy RAG applications quickly.

Potential of AI

  • GPT-Author: It utilizes a chain of GPT-4, Stable Diffusion, and Anthropic API calls to generate an original fantasy novel. Users can provide an initial prompt and enter how many chapters they'd like it to be, and the AI then generates an entire book, outputting an EPUB file compatible with e-book readers

Things to Know

  • Tracking new Gen AI models is challenging every week, and here you can find all the details from Stanford University. They are tracking them (along with datasets and applications) in the ecosystem graphs

The Opportunity...

Podcast:

  • This week's Open Tech Talks episode 133 is "The Rise of AI in Creative Writing: Its Impact and Potential with Alex Shvartsman". He’s the author of Kakistocracy (2023), The Middling Affliction (2022), and Eridani’s Crown (2019) fantasy novels. Over 120 of his short stories have appeared in Analog, Nature, Strange Horizons, and many other venues.

Apple | Spotify | Google Podcast | Youtube

Courses to attend:

  • Red Teaming LLM Applications from Deep Learning: Learn to identify and evaluate vulnerabilities in large language model (LLM) applications.
  • CS25: Transformers United V4 from Stanford

Events:

Tech and Tools...

  • CoreNet from Apple: is a deep neural network toolkit that allows training of standard and novel small and large-scale models for various tasks, including foundation models (e.g., CLIP and LLM), object classification, object detection, and semantic segmentation.
  • llamafile: Enables to distribute and run LLMs with a single file
  • IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on

Data Sets...

Other Technology News

Want to stay on the cutting edge?

Here's what else is happening in Information Technology you should know about:

  • iOS 18 could be loaded with AI, as Apple reveals 8 new artificial intelligence models that run on-device, as reported by TechRadar
  • Nvidia’s Acquisition Of Run:AI Emphasizes The Importance Of Kubernetes For Generative AI, the article written on Forbes

Earlier Edition of a newsletter:

That's it!

As always, thanks for reading.

Hit reply and let me know what you found most helpful this week - I'd love to hear from you!

Until next week,

Kashif Manzoor


The opinions expressed here are solely my conjecture based on experience, practice, and observation. They do not represent the thoughts, intentions, plans, or strategies of my current or previous employers or their clients/customers. The objective of this newsletter is to share and learn with the community.

Dubai, UAE

You are receiving this because you signed up for the AI Tech Circle newsletter or Open Tech Talks. If you'd like to stop receiving all emails, click here. Unsubscribe · Preferences

AI Tech Circle

Kashif Manzoor

Learn something new every Saturday about #AI #ML #DataScience #Cloud and #Tech with Weekly Newsletter. Join with 278+ AI Enthusiasts!

Read more from AI Tech Circle
Logo

AI Tech Circle Stay Ahead in AI with Weekly AI Roundup Read and listen on AITechCircle.com Welcome to the weekly AI Newsletter, where I provide actionable ideas and tips to assist you in your job and business. Today at a Glance: Key takeaways from the article of McKinsey, Implementing Generative AI with Speed and Safety AI Weekly news and updates Generative AI Usecase of the week for you to explore Favorite Tip Of The Week & Potential of AI Things to Know: a framework for Large Language Model...

6 days ago • 6 min read

AI Tech Circle Hey Reader! As artificial intelligence and generative AI continue to advance rapidly, the technology sector is increasingly investing in Gen AI. Gartner predicted that by 2026, over 80% of organizations will be using Gen AI or Gen AI-enabled applications, compared to less than 5% in 2023. The Oracle database is one of the world's most popular databases, known for keeping up with industry trends and providing innovative solutions to businesses. As the internet boom arrived,...

13 days ago • 7 min read

AI Tech Circle Hey Reader! I am back this week after taking a break for the Eid holidays. Recently, the UAE experienced the heaviest rainfall in its history, which posed significant challenges. The UAE leadership stepped forward to address the situation, and civil service departments and volunteers worked tirelessly day and night to support the residents affected by the heavy rain. This week started with the event 'MachinesCanSee' at the Museum of the Future, Dubai. It was quite a good...

28 days ago • 5 min read
Share this post