The Dutch digital fraud detection system SyRI was set up to detect social security fraud quickly and effectively and, by doing so, to maintain support for the social security system. The formal position was that, for the sake of effectiveness, no information about the algorithm and only very limited information about the application of the system should be shared. On the basis of a policy analysis, a legal exploration and a literature study, the authors argue that the lack of transparency about the chosen method and the application of the digital fraud detection system in social security can have far-reaching consequences for both the individual and society. This limited information sharing about the use of algorithms can lead to suspicion of, and declining confidence in, the government, and to a reduced motivation to comply with the prevailing rules. This could undermine the original purpose.
Article
How SyRI underlines the importance of transparency
Journal | Beleid en Maatschappij, Issue 3, 2021
Keywords | SyRI, digitisation, transparency, trust, ICT
Authors | Tosja Selbach and Barbara Brink
Themed article
Responsible algorithmization: do value-sensitivity and transparency lead to more trust in algorithmic decision-making?
Journal | Bestuurskunde, Issue 4, 2020
Keywords | algorithms, algorithmization, value-sensitivity, transparency, trust
Authors | Dr. Stephan Grimmelikhuijsen and Prof. dr. Albert Meijer
Algorithms are starting to play an increasingly prominent role in government organizations. The argument is that algorithms can make more objective and efficient decisions than humans. At the same time, recent scandals have highlighted that there are still many problems connected to algorithms in the public sector. There is an increasing emphasis on ethical requirements for algorithms, and we aim to connect these requirements to insights from public administration on the use of technologies in the public sector. We stress the need for responsible algorithmization – responsible organizational practices around the use of algorithms – and argue that this is needed to maintain the trust of citizens. We present two key components of responsible algorithmization – value-sensitivity and transparency – and show how these components connect to algorithmization and can contribute to citizen trust. We end the article with an agenda for research into the relation between responsible algorithmization and trust.
Themed article

Journal | Bestuurskunde, Issue 4, 2020
Authors | Dr. Haiko van der Voort and Joanna Strycharz MSc
Themed article
A transparent debate about algorithms
Journal | Bestuurskunde, Issue 4, 2020
Keywords | AI, ethics, Big Data, human rights, governance
Authors | Dr. Oskar J. Gstrein and Prof. dr. Andrej Zwitter
The police use all sorts of information to fulfil their tasks. Whereas collection and interpretation of information traditionally could only be done by humans, the emergence of ‘Big Data’ creates new opportunities and dilemmas. On the one hand, large amounts of data can be used to train algorithms. This allows them to ‘predict’ offenses such as bicycle theft, burglary, or even serious crimes such as murder and terrorist attacks. On the other hand, highly relevant questions on purpose, effectiveness, and legitimacy of the application of machine learning/‘artificial intelligence’ drown all too often in the ocean of Big Data. This is particularly problematic if such systems are used in the public sector in democracies, where the rule of law applies, and where accountability, as well as the possibility for judicial review, are guaranteed. In this article, we explore the role transparency could play in reconciling these opportunities and dilemmas. While some propose making the systems and the data they use themselves transparent, we submit that an open and broad discussion on purpose and objectives should be held during the design process. This might be a more effective way of embedding ethical and legal principles in the technology, and of ensuring legitimacy during application.
Article

Journal | Beleid en Maatschappij, Issue 3, 2020
Keywords | dirty data, predictive policing, CAS, discrimination, ethnic profiling
Authors | Mr. Abhijit Das and Mr. dr. Marc Schuilenburg
Predictive tools as instruments for understanding and responding to risky behaviour as early as possible are increasingly becoming a normal feature in local and state agencies. A risk that arises from the implementation of these predictive tools is the problem of dirty data. The input of incorrect or illegally obtained information (‘dirty data’) can influence the quality of the predictions used by local and state agencies, such as the police. The article focuses on the risks of dirty data in predictive policing by the Dutch Police. It describes the possibilities to prevent dirty data from being used in predictive policing tools, such as the Criminality Anticipation System (CAS). It concludes by emphasizing the importance of transparency for any serious solution looking to eliminate the use of dirty data in predictive policing.
Article

Journal | Beleidsonderzoek Online, October 2019
Authors | Frans L. Leeuw
Government policy increasingly has to deal with the digitization and datafication of society and human behaviour. This poses challenges for policy evaluators. This article addresses one of the accompanying phenomena: Big Data and Artificial Intelligence (BD/AI). After noting that the evaluation profession has long been rather inactive in the digital field, the article first asks what BD/AI have to offer evaluation research on (digital) policy. Five possible applications are discussed that can improve the quality, usability and relevance of evaluation research. The second question is what evaluation research has to offer when it comes to analysing/examining the reliability, validity and several other aspects of Big Data and AI. Here too, various possibilities (and difficulties) are outlined. In the author's view, it is useful on the one hand to make (more) use of BD/AI in evaluation research, but researchers would also do well to pay (more) attention to: the assumptions underlying BD/AI (including the ‘black box’ problem); the validity, safety and credibility of algorithms; the intended and unintended consequences of their use; and the question of whether claims that digital interventions partly based on BD/AI are effective (or more effective than alternatives) are substantiated and valid.
Article
Wild data: on the social consequences of Big, Open, and Linked Data systems
Journal | Bestuurskunde, Issue 1, 2016
Keywords | BOLD, autonomic computing, social consequences of technology
Authors | Dr. Dhoya Snijders
This article focuses on the question of how Big, Open and Linked Data systems (BOLD) are shifting human-data relations. BOLD is creating a new type of society that is both data-focused and data-driven. Both governments and citizens are measuring, analyzing and verifying data and acting upon these analyses. As BOLD is itself becoming intelligent, the process of collecting, linking, and analyzing data is no longer merely the domain of humans. Machine learning is picking up speed and algorithmic accuracy is being maximized as data become more complex and their output more unpredictable. Both citizens and governments will increasingly have to deal with non-human actors in the form of intelligent data-driven systems. By referring to literature on human-animal relations, this article argues that data systems are gaining autonomy and a certain level of wildness. As such systems mediate human relations, the article argues that social relations are shifting towards triadic relationships in which intelligent information systems are a significant actor.