Thursday
18/04

How to tame neural networks in artificial intelligence models

Description

Here be neurons.* How to tame neural networks in AI models – for ourselves, in newsrooms and in society – by using our human neural networks wisely.
In 2023, we entered uncharted territory. After OpenAI made ChatGPT publicly available in late 2022, our society collectively gained its first experience with generative language models. The abstract idea of artificial intelligence suddenly became something everybody could play around with – but hardly anybody had an idea of how to do so wisely. The resulting vacuum of explainability (the models behind it remained largely unknown) was quickly filled with stereotypes. In this panel discussion we will emphasise the importance of understanding our own neural networks – aka “our brains” – better, in order to tame the artificial neural networks that have been unleashed. Clearly, we need new skills to be able to use AI to our advantage – not to our demise.
Even though newsrooms all over the world have been experimenting with and using AI tools, ChatGPT opened up a whole new world. Knowing how to “probe” generative and other AI algorithms is no longer optional but a necessity. Beyond the obvious choices – letting ChatGPT summarise articles or answer readers’ questions – some clever newsrooms have started to train their own models, for example to make their reporting more constructive. Newsrooms therefore have to ask themselves: how can we use AI to make our work better? To answer this question, they need an almost magical range of capabilities: experimentation and critical thinking, new skills and a rigorous infrastructure, not to mention an understanding of their own shortcomings.
Society is facing massive – and often scary – turning points. What kind of regulation will ensure that AI is used exclusively for the common good? And who defines the “common good”? The European Union's AI Act takes a risk-based approach to this, the details of which are likely to be spelled out soon. We will discuss which gaps in the legislation journalists should monitor particularly critically. The overarching question affects us all: is a value-based development and use of AI technology possible? And whose values should those be?
This panel will explore these questions using concrete examples and experiences from media organisations, as well as scientific insights from neuroscience and media psychology. As in previous years, we will actively involve the audience in the session and encourage them to share their own experiences. Our most important goal is for the audience to leave the session with insights they can apply to their own challenges.
*The title refers to the Latin phrase "Hic sunt dracones" ("Here be dragons"), which was used on maps to mark dangerous or unexplored territories. We think it is a fantastic metaphor for the uncertainties in society's dealings with artificial intelligence and other new technologies.

Timings

17:00 – 17:50

Entry

Free

Category

Culture