NORA-AI Meet 2022, Short Summary of Day 1 Keynote

Sanyam Jain
6 min read · Jan 28, 2023
Image designed using Canva (Free online image editor)

The conference kicked off with a total of ~113 participants on day 1 (as I heard in an announcement). It was a stunning start from Klas Pettersen, CEO of NORA — The Norwegian Artificial Intelligence Research Consortium, where he and his colleagues shared the stage and discussed the challenging times in AI and the importance of XAI in such scenarios. In addition, he announced that NORDIC AI MEET '23 will most likely be held in Copenhagen, Denmark. Last but not least, Klas pointed the audience to the Nordic Machine Intelligence Journal for researchers. Some of the supporting links I found were — 1 (NORA national research school for AI), 2 (Norwegian Artificial Intelligence Research Consortium Research School), 3 (GitHub handle) and 4 (Journal Homepage with announcements).

The first keynote was from Prof. Anders C. Hansen, University of Cambridge, UK, who spoke on the topic "The mathematics of 'why things don't work' — On the potential and limits of AI" and presented a deck of 60+ informative slides:

  • A paradigm shift can be seen in the research community: where people used to talk about deep learning as a magic wand that can do anything (not talking about the use of DNNs as universal approximators), the concern now is the limitations of current AI. Articles that gained popularity are here! He particularly emphasised point number 8, about deep learning models being copy-cat machines for their input data. The need for models that can work with limited data, as humans do, remains a challenge (and part of AGI).
  • Then the professor talked about "Has Machine Learning Become Alchemy?", a practical analogy to the medieval alchemists, where he argued that deep learning models still work as black boxes and, moreover, are not even robust to one-pixel perturbations (a minimal sketch of such a perturbation test follows the figure captions below). Ali Rahimi and Yann LeCun debated this and more or less settled that there is a trade-off between explainability and accuracy of a model, in other words: "Do we want more effective machine learning models without clear theoretical explanations, or simpler, transparent models that are less effective in solving specific tasks?"

Image Source — Hacker, Philipp & Krestel, Ralf & Grundmann, Stefan & Naumann, Felix. (2020). Explainable AI under contract and tort law: legal incentives and technical challenges. Artificial Intelligence and Law, 28. 10.1007/s10506-020-09260-6.

Src: Nordic AI Meet presentation
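To make the robustness claim concrete, here is a minimal sketch (my own illustration, not from the slides) of a brute-force single-pixel perturbation test; the classifier `model`, the input shape and the `[0, max_val]` value range are all assumptions.

```python
import torch

def one_pixel_attack(model, image, label, trials=500, max_val=1.0):
    """Brute-force search for a single-pixel change that flips the prediction.

    Assumes `image` has shape (1, C, H, W) with values in [0, max_val] and
    `model` returns class logits. Purely illustrative, not an efficient attack.
    """
    model.eval()
    _, c, h, w = image.shape
    for _ in range(trials):
        perturbed = image.clone()
        # Pick a random pixel and overwrite it with a random value.
        y, x = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        perturbed[0, :, y, x] = torch.rand(c) * max_val
        with torch.no_grad():
            pred = model(perturbed).argmax(dim=1).item()
        if pred != label:
            return perturbed, pred  # a single pixel was enough to flip the label
    return None, label  # no flipping pixel found within the trial budget
```

If such a search succeeds on many test images, the "black box plus fragility" criticism above applies directly to that model.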

  • Further, he talked about "The false hope of current approaches to explainable artificial intelligence in health care" (Ghassemi, M. et al.), which criticises the present XAI approaches researchers are coming up with: the metrics and methods start to fall apart under adversarial attacks and even under forgetting. The professor also asked: given the many features your model sees, and given our confirmation bias, do you really think the feature the model has picked (or the one you have picked) is the one that leads to the true confusion matrix? A sketch of the kind of explanation this critique targets follows the image credit below.

Image Source — Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health, 3(11), e745-e750.
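For context on what is being criticised: a post-hoc saliency map is often just the gradient of the class score with respect to the input pixels. A minimal sketch of that (generic PyTorch, my own illustration rather than anything from the talk; `model` and the input shape are assumed) looks like this:

```python
import torch

def vanilla_gradient_saliency(model, image, target_class):
    """Gradient-of-score-w.r.t.-input saliency map, the kind of post-hoc
    explanation the Ghassemi et al. article cautions against over-trusting.

    Assumes `image` has shape (1, C, H, W) and `model` returns logits.
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    # Max over channels gives one importance value per pixel.
    saliency = image.grad.abs().max(dim=1)[0].squeeze(0)
    return saliency  # shape (H, W); high values = "important" pixels
```

The point of the paper is not that this computation is wrong, but that reading it as a faithful clinical explanation invites exactly the confirmation bias mentioned above.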

  • With the discussion moving towards adversarial attacks (of different kinds), adversarial learning and perturbations, the professor cited the paper "Adversarial attacks on medical machine learning — Emerging vulnerabilities demand new conversations" in his slides to make a specific point: "I don't have a problem with a model that classifies wrongly with low confidence; the challenge is a model that classifies wrongly with high confidence!" A minimal sketch of exactly that failure mode follows the image credit below.


Image Source — Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, A. L., & Kohane, I. S. (2019). Adversarial attacks on medical machine learning. Science, 363(6433), 1287–1289.
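As a hedged illustration of "wrong with high confidence" (my own example, not from the slides), here is a minimal Fast Gradient Sign Method sketch; the classifier `model`, the `[0, 1]` input range and the `epsilon` value are assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_confident_mistake(model, image, true_label, epsilon=0.03):
    """Fast Gradient Sign Method: a tiny perturbation that often makes a
    model wrong *and* confident, illustrating the quote above.

    Assumes inputs in [0, 1], shape (1, C, H, W), and a logit-returning model.
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid range.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
    probs = F.softmax(model(adversarial), dim=1)[0]
    pred = probs.argmax().item()
    print(f"prediction: {pred}, confidence: {probs[pred].item():.2%}")
    return adversarial
```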

  • Moving on, he shared references to "A deep-learning-based approach improves the speed, accuracy, and robustness of biomedical image reconstruction" and "On instabilities of deep learning in image reconstruction and the potential costs of AI". In the latter article, Vegard Antun et al. come up with a testing mechanism that ranges from small perturbations to worst-case perturbations, combined with adversarial learning and several subsampling patterns, and finally discuss how the instability relates to the network architecture, the training set and the subsampling pattern. A sketch of such a worst-case perturbation search follows the image credit below.


Src: NORA AI MEET (Nordic AI MEET)
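In the spirit of that testing mechanism, the sketch below searches for a small perturbation that maximally changes the reconstruction. This is only an outline: `recon_net` (a trained reconstruction network) and `A` (the subsampling/measurement operator) are assumed placeholders, and the penalty weighting is an arbitrary choice.

```python
import torch

def worst_case_perturbation(recon_net, A, x, steps=100, lr=0.01, radius=0.05):
    """Search for a small image perturbation r that maximally changes the
    reconstruction from subsampled measurements (an instability test in the
    spirit of Antun et al.). `recon_net` and `A` are hypothetical callables.
    """
    r = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([r], lr=lr)
    baseline = recon_net(A(x)).detach()
    for _ in range(steps):
        opt.zero_grad()
        recon = recon_net(A(x + r))
        # Maximize the change in the reconstruction while keeping r small.
        loss = -torch.norm(recon - baseline) + 10.0 * torch.relu(torch.norm(r) - radius)
        loss.backward()
        opt.step()
    return r.detach()

# A large reconstruction change for a tiny ||r|| signals instability of the
# (network, training set, subsampling pattern) combination being tested.
```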

  • In the last of his slides he talked about the GDPR and other acts that can help with auditing, regulating and democratising AI.


Src: Nordic AI Meet 2022

  • The part I could relate to most was "Hallucinations and detail transfer of AI", based on this talk and article, where Prof. Hansen discussed AI-generated hallucinations in deep learning for inverse problems. A toy example of why hallucinations are unavoidable under subsampling follows the video links below.


Src: https://youtu.be/FGGjaMEE3y0?t=307


Src: https://youtu.be/FGGjaMEE3y0?t=327

The author quickly summarises the context of the theorem for hallucinations in AI in this video: https://vimeo.com/771351791
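Here is a tiny NumPy example (my own, not from the talk) of why hallucinations are baked into subsampled inverse problems: two different signals can produce identical measurements, so any reconstruction map must invent detail for at least one of them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
keep = np.arange(0, n, 2)        # subsampling pattern: keep every other Fourier coefficient

x1 = rng.standard_normal(n)

# Build a "detail" living in the null space of the subsampled Fourier operator:
# its spectrum is zero on the kept frequencies, so the measurements cannot see it.
spectrum = np.fft.fft(rng.standard_normal(n))
spectrum[keep] = 0.0
detail = np.real(np.fft.ifft(spectrum))

x2 = x1 + detail                          # a genuinely different signal...
y1, y2 = np.fft.fft(x1)[keep], np.fft.fft(x2)[keep]

print(np.allclose(y1, y2))                # True: identical measurements
print(np.linalg.norm(x1 - x2))            # ...yet the signals differ noticeably

# Any reconstruction map sending measurements to a single signal must therefore
# either miss the detail in x2 or "hallucinate" it onto x1 -- the phenomenon above.
```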

  • He shared a reference in one of his slides to read "AI is Flawed — Here's Why" (look for tools for developing fair AI), and then concluded the keynote with his own book "Compressive Imaging: Structure, Sampling, Learning", motivated by "The Emperor's New Mind".

Following the keynote, three talks and great presentations continued; their references are as follows:

  1. Bentsen, L. Ø., Warakagoda, N. D., Stenbro, R., & Engelstad, P. (2022, November). Probabilistic Wind Park Power Prediction using Bayesian Deep Learning and Generative Adversarial Networks. In Journal of Physics: Conference Series (Vol. 2362, No. 1, p. 012005). IOP Publishing.
  2. Roald, M., Schenker, C., Calhoun, V. D., Adali, T., Bro, R., Cohen, J. E., & Acar, E. (2022). An AO-ADMM approach to constraining PARAFAC2 on all modes. SIAM Journal on Mathematics of Data Science, 4(3), 1191–1222.
  3. NB-BERT-large is a general BERT-large model built on the large digital collection at the National Library of Norway. — https://huggingface.co/NbAiLab
  4. Olsen, L. H. B., Glad, I. K., Jullum, M., & Aas, K. (2022). Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features. Journal of Machine Learning Research, 23(213), 1–51. (https://github.com/LHBO/ShapleyValuesVAEAC)
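Since reference 4 is about Shapley values, here is a minimal from-scratch sketch of exact interventional Shapley values for a toy model. This is not the paper's method (which uses variational autoencoders to handle dependent features); it is only context for what a Shapley value computes, and every name in it is hypothetical.

```python
import itertools
import math
import numpy as np

def shapley_values(predict, x, background):
    """Exact interventional Shapley values for a single instance `x`.

    `predict` maps an (n_samples, n_features) array to predictions and
    `background` is a reference dataset used to fill in "absent" features.
    Enumerates all feature coalitions, so only sensible for a few features.
    """
    d = len(x)

    def value(subset):
        # Average prediction with features in `subset` fixed to x's values
        # and the remaining features drawn from the background data.
        samples = background.copy()
        samples[:, list(subset)] = x[list(subset)]
        return predict(samples).mean()

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for s in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                phi[i] += w * (value(s + (i,)) - value(s))
    return phi  # phi.sum() + value(()) equals value(all features) (efficiency)

# Toy usage with a linear model:
predict = lambda X: X @ np.array([2.0, -1.0, 0.5])
background = np.random.default_rng(0).normal(size=(100, 3))
print(shapley_values(predict, np.array([1.0, 2.0, 3.0]), background))
```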

About me:

I am Sanyam Jain, currently a student in the Masters in Applied Computer Science '24 program at Østfold University College (HiØ), working as a researcher with Stefano Nichele on Artificial General Intelligence (open-ended AI, ALife and CAs). Thanks to every component of the NORDIC AI Meet ecosystem and HiØ!
