Big-Data Analytics - Developing methods for the analysis of large amounts of data without compromising data protection

01/08/2018 to 31/07/2021

Due to the progressive digitization of almost all areas of work and life, and the new products and services this enables, the topic of privacy has gained in importance. Trends such as personalized services and products, personalized medicine, and services financed through the sale of personalized advertising, or even of the customer data itself, have come into the limelight. At the same time, data protection has been strengthened in recent years, most prominently by the European General Data Protection Regulation (GDPR, known in German as the DSGVO), which came into force in May 2018.

These privacy efforts are counterbalanced by a variety of research and business interests that depend on access to personally identifiable information, and the quality of this data is often essential. As has been demonstrated academically, anonymization processes generally distort the data and thereby degrade the quality of analysis results. Pseudonymization, typically used as a substitute, is no longer sufficient as a data protection measure under the GDPR. This project will therefore explore methods to assess and mitigate these negative effects on the results of big data analysis. Different framework conditions must be taken into account, each requiring different methods, depending on whether the analyses are trend analyses or exact evaluations at the cohort or individual level.

An important aspect of the GDPR is informational self-determination. This includes the right to retroactively withdraw consent, the right to transparency, and ultimately the right to data deletion: data subjects have the right to know which of their data is being used, and for what purpose. Processes need to take this into account, a complex challenge in terms of intelligent algorithms.
Therefore, methods are being developed to ensure transparency without creating new threats to data protection, as well as methods for deleting data from complex data-processing systems. To this end, the effects of data deletion on the results of machine learning algorithms are quantified.
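The tension between anonymization and data quality mentioned above can be illustrated with a minimal sketch (the data and binning scheme are hypothetical, not the project's actual method): coarsening exact ages into 10-year bins, in the spirit of k-anonymity generalization, barely shifts an aggregate trend statistic but introduces noticeable error at the individual level.

```python
from statistics import mean

# Hypothetical exact ages of a small cohort (illustrative data only).
ages = [23, 25, 31, 34, 38, 41, 47, 52, 58, 64]

# A k-anonymity-style generalization: replace each age by the midpoint
# of its 10-year bin (20-29 -> 24.5, 30-39 -> 34.5, ...).
def generalize(age, width=10):
    return (age // width) * width + (width - 1) / 2

binned = [generalize(a) for a in ages]

# Aggregate (trend) statistics survive the coarsening almost unchanged ...
trend_error = abs(mean(binned) - mean(ages))

# ... while individual records can be off by several years.
individual_error = max(abs(b - a) for a, b in zip(ages, binned))

print(f"trend error: {trend_error:.1f} years, "
      f"worst individual error: {individual_error:.1f} years")
```

This is exactly the distinction the project draws between trend analyses, which tolerate such distortion, and exact evaluations on a cohort or individual basis, which do not.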
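Quantifying the effect of data deletion on a trained model can be sketched as follows (an assumed, illustrative approach, not the project's actual method): fit a simple least-squares line before and after honoring deletion requests, then measure how much the coefficients and predictions shift.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Hypothetical training data (illustrative only).
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 4.2, 5.9, 8.1, 9.8, 12.2, 14.1, 15.9]

a_full, b_full = fit_line(xs, ys)

# Simulate withdrawal of consent for two records (0-indexed positions).
deleted = {2, 5}
xs_kept = [x for i, x in enumerate(xs) if i not in deleted]
ys_kept = [y for i, y in enumerate(ys) if i not in deleted]
a_del, b_del = fit_line(xs_kept, ys_kept)

# Quantify the impact: coefficient shift and worst-case prediction drift.
coef_shift = abs(a_del - a_full)
pred_drift = max(abs((a_del * x + b_del) - (a_full * x + b_full)) for x in xs)
print(f"slope shift: {coef_shift:.4f}, max prediction drift: {pred_drift:.4f}")
```

For large models, retraining after every deletion is expensive, which is why bounding or estimating this drift without full retraining is a research question in its own right.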

Monday, 10 December, 2018


On 18 July 2019, a webinar entitled “GDPR compliance in the emerging technologies” will be organized in collaboration with the GDPR Cluster projects.

Future Events

The 14th International Conference on Availability, Reliability and Security (ARES 2019) will be held from August 26 to August 29, 2019 at the University of Kent, Canterbury, UK.

26/08/2019 to 29/08/2019

PROTECTIVE is co-organising the 2nd International Workshop on Cyber Threat Intelligence Management (CyberTIM 2019) as part of the ARES 2019 conference in the UK on 26-29 August 2019.

26/08/2019 to 29/08/2019