Bernd Buchegger (AK Digital Sales) and Walter Wratschko (AK Data Protection) presented the new EU law on AI in the focus talk “AI & Data Protection” and clarified who is the author of AI works.
Why spend minutes poring over a text when artificial intelligence (AI) spits it out in seconds? The tools are a welcome help, of course, but they can quickly lead to unpleasant side effects, as the focus talk “Artificial Intelligence & Data Protection” by the Software Internet Cluster (SIC) and the UBIT specialist group showed on March 5, 2024. “Data that you enter into ChatGPT is used by the system for self-training. If you enter your own company data in the open version, it can surface in answers given to other users, with potentially far-reaching consequences,” warned Bernd Buchegger from the “Digital Sales” working group.
Beware the data trap!
Walter Wratschko from the “Data Protection” working group highlighted the data protection risks of AI. Frequently used tools such as ChatGPT are text generators: producing fluent text takes priority over factual accuracy. “Also pay attention to possible discrimination and bias in the results,” said Wratschko. Other pitfalls include misuse of data, lack of transparency and non-compliant use of personal data. The latter in particular can happen quickly, for example when you let the AI evaluate an Excel file. To be on the safe side, you should pseudonymize the data before loading it into an AI tool. “So that you don’t also make the data available to the AI for training, I recommend that companies use APIs instead of free accounts,” said Buchegger.
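The pseudonymization step mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not a legal-grade solution: the customer data, column names and salt are invented for the example, and a salted hash is pseudonymization (reversible via the local mapping), not anonymization.

```python
import hashlib

# Invented sample data; in practice this would come from your Excel/CSV export.
rows = [
    {"name": "Anna Maier", "email": "anna.maier@example.com", "revenue": 1200},
    {"name": "Jonas Gruber", "email": "jonas.gruber@example.com", "revenue": 950},
]

def pseudonym(value: str, salt: str = "company-secret") -> str:
    """Deterministic pseudonym: the same input always maps to the same token,
    so the AI can still group rows, but no real name leaves the company."""
    return "P-" + hashlib.sha256((salt + value).encode()).hexdigest()[:8]

mapping = {}          # kept locally, so results can be re-identified later
pseudonymized = []
for row in rows:
    token = pseudonym(row["name"])
    mapping[token] = row["name"]
    pseudonymized.append(
        {"name": token, "email": "redacted", "revenue": row["revenue"]}
    )

# Only `pseudonymized` would be sent to the AI tool; `mapping` stays in-house.
```

The salt matters: without it, anyone could recompute the hash of a known name and re-identify the row.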
The law is intended to provide clarity
Recommendations are one thing, laws are another. With the so-called “AI Act”, the EU is now trying to create a legal framework for the use of artificial intelligence. “The law will be passed in the coming months. Once it enters into force, companies will have 24 months to comply with all of its regulations,” said Wratschko. And what will the regulatory framework look like? Four risk levels for AI have been defined: 1. Unacceptable risk, 2. High risk, 3. Limited risk and 4. Minimal or no risk. The reassuring news: most of the AI tools currently in use belong to the fourth group, including spam filters, for example. The situation is different for chatbots, which fall into the “limited risk” category: users must be made aware that they are interacting with a machine so they can make an informed decision. Wratschko appealed to common sense when dealing with AI: “Don’t be afraid of AI, but of natural stupidity and human greed!”
AI cannot be the author
The focus talk also answered another hotly debated question: who is the author of what the AI creates? “Only a natural person can be the author,” Buchegger made clear. As the writer of the prompt, you become the author, which is why you are free to use an image the AI generated from a prompt you defined. But be careful: read the license terms of the respective tool in advance! “If you are unsure about texts, don’t just copy and paste them,” said Buchegger. There is still uncertainty regarding the labeling requirement for AI-generated content: while such a requirement applies in France and Canada, Austria is lagging behind. Buchegger: “Some platforms such as Facebook and Instagram have required labeling of AI-generated content since February 2024. The option has existed on TikTok since 2023.”
Finally, the numerous online and in-person attendees received some practical links:
- With the “WKO KI Guideline Generator”, you can generate AI guidelines tailored to your business
- With the “EU AI Act Compliance Checker” you can check the AI products used in your company to see whether they function in accordance with the EU AI Act
- If you want to use a European AI tool instead of ChatGPT, Mistral is a good option: https://chat.mistral.ai
These tools uncover AI content (excerpt):
- GPTZero (https://gptzero.me)
- Originality.ai (https://originality.ai)