Slack patches potential AI security issue

by Yaron

Update: Slack has published an update, claiming to have “deployed a patch to address the reported issue,” and that there isn’t currently any evidence that customer data have been accessed without authorization. Here’s the official statement from Slack that was posted on its blog:

When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data.

Below is the original article that was published.

When ChatGTP was added to Slack, it was meant to make users’ lives easier by summarizing conversations, drafting quick replies, and more. However, according to security firm PromptArmor, trying to complete these tasks and more could breach your private conversations using a method called “prompt injection.”

The security firm warns that by summarizing conversations, it can also access private direct messages and deceive other Slack users into phishing. Slack also lets users request grab data from private and public channels, even if the user has not joined them. What sounds even scarier is that the Slack user does not need to be in the channel for the attack to function.

In theory, the attack starts with a Slack user tricking the Slack AI into disclosing a private API key by making a public Slack channel with a malicious prompt. The newly created prompt tells the AI to swap the word “confetti” with the API key and send it to a particular URL when someone asks for it.

The situation has two parts: Slack updated the AI system to scrape data from file uploads and direct messages. Second is a method named “prompt injection,” which PromptArmor proved can make malicious links that may phish users.

The technique can trick the app into bypassing its normal restrictions by modifying its core instructions. Therefore, PromptArmor goes on to say, “Prompt injection occurs because a [large language model] cannot distinguish between the “system prompt” created by a developer and the rest of the context that is appended to the query. As such, if Slack AI ingests any instruction via a message, if that instruction is malicious, Slack AI has a high likelihood of following that instruction instead of, or in addition to, the user query.”

To add insult to injury, the user’s files also become targets, and the attacker who wants your files doesn’t even have to be in the Slack Workspace to begin with.

GPT-5: everything we know so far about OpenAI’s next frontier model
A MacBook Pro on a desk with ChatGPT's website showing on its display.

There’s perhaps no product more hotly anticipated in tech right now than GPT-5. Rumors about it have been circulating ever since the release of GPT-4, OpenAI’s groundbreaking foundational model that’s been the basis of everything the company has launched over the past year, such as GPT-4o, Advanced Voice Mode, and the OpenAI o1-preview.

Those are all interesting in their own right, but a true successor to GPT-4 is still yet to come. Now that it’s been over a year a half since GPT-4’s release, buzz around a next-gen model has never been stronger.
When will GPT-5 be released?
OpenAI has continued a rapid rate of progress on its LLMs. GPT-4 debuted on March 14, 2023, which came just four months after GPT-3.5 launched alongside ChatGPT. OpenAI has yet to set a specific release date for GPT-5, though rumors have circulated online that the new model could arrive as soon as late 2024.

Read more

  • Computing

OpenAI could release its next-generation model by December
ChatGPT giving a response about its knowledge cutoff.

OpenAI plans to release its next-generation frontier model, code-named Orion and rumored to actually be GPT-5, by December, according to an exclusive report from The Verge. However, OpenAI boss Sam Altman is already pushing back.

According to “sources familiar with the plan,” Orion will not initially be released to the general public, as the previous GPT-4 variants were. Instead, the company intends to hand the new model over to select businesses and partners, who will then use it as a platform to build their own products and services. This is the same strategy that Nvidia is pursuing with its NVLM 1.0 family of large language models (LLMs).

Read more

  • Computing

Apple will pay up to $1M to anyone who hacks its AI cloud
Apple's Craig Federighi speaking about macOS security at WWDC 2022.

Apple just made an announcement that shows it means business when it comes to keeping Apple Intelligence secure. The company is offering a massive bug bounty of up to $1 million to anyone who is able to hack its AI cloud, referred to as Private Cloud Compute (PCC). These servers will take over Apple Intelligence tasks when the on-device AI capabilities just aren’t good enough — but there are downsides, which is why Apple’s bug-squashing mission seems like a good idea.

As per a recent Apple Security blog post, Apple has created a virtual research environment and opened the doors to the public to let everyone take a peek at the code and judge its security. The PCC was initially only available to a group of security researchers and auditors, but now, anyone can take a shot at trying to hack Apple’s AI cloud.

Related Posts

Leave a Comment