Privacy- and security-related aspects of data mining and machine learning have been the topic of active research in recent years, due to the numerous applications with privacy and/or security requirements. Privacy has become a serious concern due to the collection, analysis and sharing of personal data by privately owned companies and public sector organizations for purposes such as data publishing or data mining. This has led to the development of privacy-preserving data mining and machine learning methods. More general security considerations arise in applications such as biometric authentication, intrusion detection and response, and malware classification. This has motivated the development of adversarial learning algorithms, while parallel work in multi-agent settings and on low-regret learning algorithms has revealed interesting interplays between learning and game theory.
Although significant research has been conducted, numerous theoretical and practical challenges remain. Firstly, several emerging research areas in data analysis (such as stream mining, mobility data mining and social network analysis), decision making and machine learning (such as fraud detection, and intrusion detection and response) require new theoretical and applied techniques for providing privacy or security. Secondly, there is an urgent need for learning and mining methods with sufficient privacy and security guarantees for critical applications (e.g., biomedical, financial, mobility). Thirdly, there is an emerging demand for security applications such as biometric authentication, malware detection and spam filtering. Finally, large-scale systems require data integration and linkage, information sharing and decision making in a secure and privacy-preserving manner over a wide network. Further research is required to provide methodologies that scale to very large datasets and to large numbers of parties in privacy and security applications. In all cases, the strong interconnections between data mining and machine learning, cryptography and game theory create the need for multidisciplinary approaches to adversarial learning and mining problems.
The aim of this workshop is to bring together scientists and practitioners who conduct cutting-edge research on privacy and security issues in data mining and machine learning, in order to discuss the most recent advances in these areas, identify open problems and research directions, and propose possible solutions. We invite interdisciplinary research spanning cryptography, data mining, game theory, machine learning, privacy, security and statistics. We welcome mature contributions as well as interesting preliminary results and descriptions of open problems in emerging research domains and applications of privacy and security in data mining and machine learning.
The workshop invites original submissions in any of the following core subjects. For each subject we provide an indicative list of topics of interest.
- Data privacy and security issues.
  - Privacy-preserving data publishing and anonymity.
  - Privacy-aware data fusion, integration and record linkage.
  - Privacy evaluation techniques and metrics.
  - Auditing and query execution over private data.
  - Privacy-aware access control.
- Theoretical aspects of machine learning for security.
  - Adversarial classification, learning and hypothesis testing.
  - Learning in unknown and/or partially observable stochastic games.
  - Special learning problems in security applications (e.g., learning under distribution shifts, semi-supervised learning, learning on large datasets).
  - Distributed inference and decision making for security.
  - Game-theoretic topics related to security applications.
- Privacy-preserving data mining and machine learning.
  - Emerging research domains in privacy-preserving mining and learning (e.g., stream mining, social network analysis, graph analysis).
  - Application-specific privacy-preserving data mining and machine learning.
  - Knowledge hiding approaches for privacy-preserving learning and mining.
  - Secure multiparty computation and cryptographic approaches.
  - Statistical approaches for privacy-preserving data mining.
- Security applications of machine learning.
  - Cryptographic applications of machine learning.
  - Intrusion detection and response.
  - Biometric authentication, fraud detection.
  - Statistical analysis and classification of malware.
  - Spam filtering and captchas.
- Workshop: 24 September 2010
- Christos Dimitrakakis, Goethe University Frankfurt, Germany
- Aris Gkoulalas-Divanis, IBM Research Zurich, Switzerland
- Aikaterini Mitrokotsa, EPFL, Switzerland
- Yucel Saygin, Sabanci University, Turkey
- Vassilios S. Verykios, University of Thessaly, Greece
Christos Dimitrakakis and Aikaterini Mitrokotsa are area chairs for machine learning and security applications. Aris Gkoulalas-Divanis, Yucel Saygin and Vassilios S. Verykios are area chairs for privacy and privacy-preserving data mining.
- Ulf Brefeld, Yahoo Research, Catalonia, Spain
- Michael Bruckner, University of Potsdam, Germany
- Mike Burmester, Florida State University, FL, USA
- Kamalika Chaudhuri, University of California at San Diego, USA
- Peter Christen, Australian National University, Australia
- Chris Clifton, Purdue University, USA
- Maria Luisa Damiani, University of Milano, Italy
- Juan M. Estevez-Tapiador, University of York, UK
- Elena Ferrari, University of Insubria, Italy
- Dimitrios Kalles, Hellenic Open University, Greece
- Murat Kantarcioglu, University of Texas at Dallas, USA
- Kun Liu, Yahoo! Labs, California, USA
- Daniel Lowd, University of Oregon, USA
- Grigorios Loukides, Vanderbilt University, USA
- Emmanuel Magkos, Ionian University, Greece
- Bradley Malin, Vanderbilt University, USA
- Mohamed Mokbel, University of Minnesota, USA
- Blaine Nelson, UC Berkeley, USA
- Ercan Nergiz, Sabanci University, Turkey
- Roberto Perdisci, Georgia Institute of Technology, USA
- Pedro Peris-Lopez, TU Delft, Netherlands
- Aaron Roth, Carnegie Mellon University, USA
- Benjamin I. P. Rubinstein, University of California, USA
- Jianhua Shao, Cardiff University, UK
- Jessica Staddon, PARC, USA
- Angelos Stavrou, George Mason University, USA
- Grigorios Tsoumakas, Aristotle University of Thessaloniki, Greece
- Shobha Venkataraman, AT&T, USA
- Philip S. Yu, University of Illinois at Chicago, USA
Publication: Lecture Notes in Artificial Intelligence (Springer LNCS series).
The authors of the three best workshop papers related to privacy (core themes 1 and 3) will be invited to prepare substantially revised and extended versions of their work for publication in the Transactions on Data Privacy journal.
Authors of selected papers related to learning, games and security (themes 2 and 4) will be invited to prepare substantially revised and extended versions of their work for publication in IEEE Transactions on Dependable and Secure Computing.