Big-Data Analytics - Developing methods for the analysis of large amounts of data without compromising data protection

Date: 
01/08/2018 to 31/07/2021

Due to the progressive digitization of almost all areas of work and life, and the new products and services this enables, privacy has gained in importance. Trends such as personalized services and products, personalized medicine, and services financed by the sale of personalized advertising, or even of the customer data itself, have moved into the limelight. At the same time, data protection has been strengthened in recent years, most prominently by the European General Data Protection Regulation (GDPR; in German: DSGVO), which came into force in May 2018. These privacy efforts are counterbalanced by a variety of research and business interests that depend on the availability of personal data, for which data quality is often essential. As academic work has demonstrated, anonymization generally distorts the data and thus degrades the quality of analysis results. Pseudonymization, typically used as a substitute, no longer suffices as an anonymization measure under the GDPR, since pseudonymized data remains personal data.

This project will therefore explore methods to assess and mitigate these negative effects on the results of big data analysis. Different framework conditions must be taken into account, each requiring different methods, depending on whether the analyses are trend analyses or exact evaluations on a cohort or individual basis. An important aspect of the GDPR is informational self-determination: it includes the right to retroactively withdraw consent, the right to transparency, and ultimately the right to data deletion. Data subjects have the right to know which of their data is used for what purpose. Processing systems need to take this into account - a complex challenge in terms of intelligent algorithms.
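As a minimal illustration of the utility loss mentioned above, the following hypothetical sketch coarsens exact ages into 10-year bins - a generalization step of the kind used for k-anonymity - and compares the mean age before and after. The data and bin width are invented for illustration and are not from the project.

```python
from statistics import mean

def generalize_age(age: int, bin_width: int = 10) -> float:
    """Replace an exact age with the midpoint of its bin (e.g. 34 -> 35.0)."""
    lower = (age // bin_width) * bin_width
    return lower + bin_width / 2

ages = [23, 27, 34, 41, 45, 52, 58, 63, 67, 71]  # hypothetical raw data
anonymized = [generalize_age(a) for a in ages]

true_mean = mean(ages)
anon_mean = mean(anonymized)
# The gap between the two means is one simple measure of the
# quality loss introduced by the anonymization step.
distortion = abs(true_mean - anon_mean)
print(true_mean, anon_mean, distortion)
```

Even this tiny example shows a measurable shift in a basic statistic; with stronger generalization (wider bins, suppression), the distortion grows accordingly.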
Therefore, methods are being developed to ensure transparency without creating new threats to data protection, as well as methods for deleting data from complex data-processing systems. To this end, the effects of data deletion on the results of machine learning algorithms are quantified.
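One simple way to quantify the effect of a deletion, sketched here with invented data rather than any method from the project, is to retrain a model with and without the deleted record and measure how much a fitted parameter changes. The example fits an ordinary least-squares line before and after removing one subject's data point (e.g. after a withdrawal of consent).

```python
from statistics import mean

def fit_line(points):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_bar, y_bar = mean(xs), mean(ys)
    a = sum((x - x_bar) * (y - y_bar) for x, y in points) \
        / sum((x - x_bar) ** 2 for x in xs)
    return a, y_bar - a * x_bar

# Hypothetical training data; the last record belongs to the
# subject who withdraws consent and is an outlier.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0), (5, 30.0)]

slope_all, _ = fit_line(data)        # model trained on all data
slope_del, _ = fit_line(data[:-1])   # model retrained after the deletion
impact = abs(slope_all - slope_del)  # quantified effect of the deletion
print(slope_all, slope_del, impact)
```

For influential records the retrained model can differ substantially, as here; for typical records the change is small - exactly the kind of effect such quantification is meant to make visible.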
