2017 Australasian Database Conference


Keynotes:

Prof Mingsheng Ying, University of Technology Sydney

Bio: Mingsheng Ying was Cheung Kong Chair Professor at Tsinghua University and Director of the Scientific Committee of the National Key Laboratory of Intelligent Technology and Systems, China. Since 2008, he has been Distinguished Professor and Research Director of the Centre for Quantum Software and Information, University of Technology Sydney, Australia. He is also Deputy Director for Research of the Institute of Software, Chinese Academy of Sciences.
Ying’s research interests include quantum computation and quantum information, programming theory, and logical foundations of artificial intelligence. In particular, he developed Hoare logic for quantum programs and proved its (relative) completeness (TOPLAS’11). He defined the notion of invariants for quantum programs (POPL’17). He initiated the research line of model checking quantum Markov chains (CONCUR’12-14, TOCL’14). Ying is the author of Foundations of Quantum Programming (Morgan Kaufmann 2016).


Title: Quantum Graph Reachability Problem

Abstract: Graph reachability is a fundamental problem in database theory and many other areas of computer science. In this talk, we consider the quantum graph reachability problem, which originally arose in the verification and analysis of quantum programs and the model checking of quantum systems, but may also interest the database community. We will discuss the following issues: 1. How can we naturally define a graph structure in the state Hilbert space of a quantum system from its (discrete-time) dynamics? 2. Why do the approaches to the classical graph reachability problem not work for the quantum reachability problem? 3. A strongly connected component decomposition theorem for quantum graphs. At the end of the talk, a series of open problems will be pointed out, including possible applications to database search on future quantum computers.
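
As a point of reference for question 2, the sketch below illustrates classical reachability via breadth-first search, the baseline whose limitations for quantum graphs the talk discusses. The graph encoding and function name are illustrative assumptions, not material from the talk.

    # Classical graph reachability via breadth-first search (illustrative sketch).
    from collections import deque

    def reachable(graph, source, target):
        """Return True if `target` can be reached from `source` in a directed graph
        given as a dict mapping each vertex to a list of successors."""
        seen, queue = {source}, deque([source])
        while queue:
            v = queue.popleft()
            if v == target:
                return True
            for w in graph.get(v, []):
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False

    # Example: 0 -> 1 -> 2, and 3 is isolated.
    g = {0: [1], 1: [2], 2: [], 3: []}
    print(reachable(g, 0, 2))  # True
    print(reachable(g, 0, 3))  # False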



Dr Divesh Srivastava, AT&T Labs-Research

Bio: Divesh Srivastava is the head of Database Research at AT&T Labs-Research. He is a Fellow of the Association for Computing Machinery (ACM) and the managing editor of the Proceedings of the VLDB Endowment (PVLDB). He has served as a trustee of the VLDB Endowment, as an associate editor of the ACM Transactions on Database Systems (TODS), as an associate Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (TKDE), and as a general or program committee co-chair of many conferences. He has presented keynote talks at several international conferences, and his research interests and publications span a variety of topics in data management. He received his Ph.D. from the University of Wisconsin, Madison, USA, and his Bachelor of Technology from the Indian Institute of Technology, Bombay, India.


Title: Big Data Integration

Abstract: The Big Data era is upon us: data are being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Since the value of data explodes when it can be linked and fused with other data, addressing the big data integration (BDI) challenge is critical to realizing the promise of Big Data. BDI differs from traditional data integration in many dimensions: (i) the number of data sources, even for a single domain, has grown to be in the tens of thousands, (ii) many of the data sources are very dynamic, as a huge amount of newly collected data are continuously made available, (iii) the data sources are extremely heterogeneous in their structure, with considerable variety even for substantially similar entities, and (iv) the data sources are of widely differing qualities, with significant differences in the coverage, accuracy and timeliness of data provided. This talk presents techniques to address these novel challenges faced by big data integration, and identifies a range of open problems for the community.



Prof Masaru Kitsuregawa, National Institute of Informatics / University of Tokyo

Bio: Masaru Kitsuregawa received his Ph.D. degree in Information Engineering from the University of Tokyo in 1983. He then joined the Institute of Industrial Science, the University of Tokyo, where he is currently a professor. He has also been a professor at the Earth Observation Data Integration & Fusion Research Initiative of the University of Tokyo since 2010, and has served as Director General of the National Institute of Informatics since 2013. Dr. Kitsuregawa's research interests include database engineering, and he has been a principal researcher of the Funding Program for World-Leading Innovative R&D on Science and Technology, the MEXT Grant-in-Aid Program for "Info-Plosion", and METI's Information Grand Voyage Project. He served as President of the Information Processing Society of Japan from 2013 to 2015, and has served on committees for a number of international conferences, including as ICDE Steering Committee Chair. He is an IEEE Fellow, ACM Fellow, IEICE Fellow and IPSJ Fellow, and has won the ACM SIGMOD E. F. Codd Innovations Award, the Medal with Purple Ribbon, the 21st Century Invention Award, and the C&C Prize.


Title: Big Data for Solving Societal Problems

Abstract: Big data has been in the spotlight since 2012, following the Obama administration's Big Data initiative. In Japan we launched the Info-plosion project well before that and have been trying to solve societal problems using big data, in areas such as healthcare, disaster prevention, and transportation. I will talk about a 20-petabyte earth observation system and a healthcare information system with 200 billion records, among other examples.



Distinguished Lecture:

Dr Guoliang Li, Tsinghua University

Bio: Guoliang Li is an Associate Professor in the Department of Computer Science, Tsinghua University, Beijing, China. His research interests include crowdsourced data management, large-scale data cleaning and integration, and big spatio-temporal data analytics. He has regularly served as a PC member of many premier conferences, such as SIGMOD, VLDB, KDD, ICDE, WWW, IJCAI, and AAAI. He was a PC co-chair of WAIM 2014, WebDB 2014, and NDBC 2016, and an area chair of CIKM 2016-2017. He is an associate editor of several premier journals, such as TKDE, Big Data Research, and FCS. He has published more than 80 papers in premier conferences and journals, such as SIGMOD, VLDB, ICDE, SIGKDD, SIGIR, ACM TODS, VLDB Journal, and TKDE. His papers have been cited more than 3600 times. He received the IEEE TCDE Early Career Award, and best paper awards/nominations at DASFAA 2014 and APWeb 2014.


Title: Hybrid Human-Machine Big Data Integration

Abstract: Data integration cannot be completely addressed by automated processes. We propose a hybrid human-machine method that harnesses human ability to address this problem. The framework first uses machine algorithms to identify possible matching pairs and then utilizes the crowd to compute the actual matching pairs from these candidates. In this talk, I will introduce our two systems for hybrid human-machine big data integration. (1) DIMA: a distributed in-memory system for big data integration that can use SQL to integrate heterogeneous data. DIMA can be used to identify candidate matching pairs in big data integration. (2) CDB: a crowd-powered database system that provides declarative programming interfaces and allows users to pose crowdsourced queries in an SQL-like language. CDB can be used to refine the candidate pairs in big data integration.
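
As a rough illustration of the two-stage framework described above (not the actual DIMA or CDB implementations), the sketch below uses a simple string-similarity threshold as the machine step and a stubbed crowd query as the human step; the similarity measure, the threshold and the ask_crowd() stub are assumptions for illustration only.

    from difflib import SequenceMatcher
    from itertools import product

    def candidate_pairs(left, right, threshold=0.6):
        """Machine step: keep only record pairs whose string similarity is high."""
        for a, b in product(left, right):
            if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
                yield (a, b)

    def ask_crowd(pair):
        """Crowd step (stub): a real system would post this as a task to workers."""
        print("Crowd task: do these refer to the same entity?", pair)
        return True  # placeholder answer

    left = ["Apple Inc.", "Microsoft Corp."]
    right = ["apple incorporated", "Microsoft Corporation", "Oracle"]
    matches = [p for p in candidate_pairs(left, right) if ask_crowd(p)]
    print(matches)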




Invited Talks for Special Track on Quantum Computing and Artificial Intelligence:

Prof. Man-Hong Yung, Southern University of Science and Technology

Bio: Professor Man-Hong Yung grew up and was educated in Hong Kong. He received a Bachelor's degree in physics in 2002 and a Master's degree in physics in 2004 from the Chinese University of Hong Kong. In the summer of 2003, he visited the Institute for Quantum Information at the California Institute of Technology (Caltech) to learn the theory of quantum information, and in 2004 he was invited to Harvard University to participate in quantum information research projects. He then went to the University of Illinois at Urbana-Champaign (UIUC) for his Ph.D. studies under the supervision of Nobel Laureate Prof. Anthony Leggett (Physics, 2003). His Ph.D. thesis is at the intersection of physics and information science. After graduation, he went to Harvard University as a postdoctoral fellow, working mainly on quantum information and physical chemistry. During this period, he also served as a Visiting Scientist at the MIT Center for Excitonics and as an Executive Board Member of the MIT China Energy and Environment Research Association. Professor Yung returned to China in September 2013 as an Assistant Professor at the Institute for Interdisciplinary Information Sciences, Tsinghua University, and in 2016 moved to the Southern University of Science and Technology as an Associate Professor in the Department of Physics. His research focuses on the design of quantum algorithms and quantum simulation, where he has made a series of significant contributions. He has published many papers as first or corresponding author in top journals such as Nature Photonics, Physical Review Letters, PNAS, Nature Communications (4 papers), and Science Advances, and was invited to write review articles for Annual Review of Physical Chemistry and Advances in Chemical Physics. As one of his representative works, the study of quantum thermal insulation [Li & Yung (co-authorship), NJP 2014] was chosen by IOP Publishing as an "IOP Select" paper and was selected by the New Journal of Physics as one of its "Highlights of 2014". In addition to serving as a reviewer for various journals, Prof. Yung was invited to join the editorial board of Scientific Reports and to review articles in the field of quantum physics.


Title: Transforming Bell's Inequalities into State Classifiers with Machine Learning

Abstract: Quantum information science has profoundly changed the ways we understand, store, and process information. A major challenge in this field is to find an efficient means of classifying quantum states. For instance, one may want to determine whether a given quantum state is entangled or not. However, a complete characterization of a quantum state, known as quantum state tomography, is in general a resource-consuming operation. An attractive alternative is the use of Bell's inequalities as an entanglement witness, where only partial information about the quantum state is needed. The problem is that entanglement is necessary but not sufficient for violating Bell's inequalities, making them an unreliable state classifier. Here we aim to solve this problem with the methods of machine learning. More precisely, given a family of quantum states, we randomly pick a subset of it to construct a quantum-state classifier that accepts only partial information about each quantum state. Our results indicate that these transformed Bell-type inequalities can perform significantly better than the original Bell's inequalities in classifying entangled states. We further extend our analysis to three-qubit and four-qubit systems, performing classification of quantum states into multiple species. These results demonstrate how tools from machine learning can be applied to problems in quantum information science.
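
A toy sketch of the learning step described above, under simplifying assumptions (product states as the separable class, Haar-random pure states as the entangled class, and a linear SVM as the classifier; none of this is the speaker's actual code or data): the classifier is trained on the same two-qubit Pauli correlators that enter Bell-type inequalities, so its learned weights act as a data-driven, Bell-like linear witness.

    import numpy as np
    from sklearn.svm import LinearSVC

    PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),      # X
              np.array([[0, -1j], [1j, 0]], dtype=complex),   # Y
              np.array([[1, 0], [0, -1]], dtype=complex)]     # Z

    rng = np.random.default_rng(0)

    def random_qubit():
        v = rng.normal(size=2) + 1j * rng.normal(size=2)
        return v / np.linalg.norm(v)

    def correlators(psi):
        """Nine real features <psi| sigma_i (x) sigma_j |psi>, i, j in {X, Y, Z}."""
        return np.array([np.real(np.vdot(psi, np.kron(a, b) @ psi))
                         for a in PAULIS for b in PAULIS])

    # Separable samples: product states. "Entangled" samples: random pure two-qubit
    # states, which are entangled with probability one (a toy-data assumption).
    X, y = [], []
    for _ in range(500):
        X.append(correlators(np.kron(random_qubit(), random_qubit()))); y.append(0)
        v = rng.normal(size=4) + 1j * rng.normal(size=4)
        X.append(correlators(v / np.linalg.norm(v))); y.append(1)

    clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    # The learned weights define a Bell-like linear functional of the correlators.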



Prof. Zhengfeng Ji, University of Technology Sydney

Bio: Zhengfeng Ji is currently a Professor at the Centre for Quantum Software and Information (QSI), Faculty of Engineering and Information Technology (FEIT), University of Technology Sydney. He received his BEng and PhD degrees from the Department of Computer Science and Technology, Tsinghua University, Beijing, China, in 2002 and 2007, respectively. His current research interests include quantum algorithms, quantum complexity theory and (post-)quantum cryptography.


Title: A Separability-Entanglement Classifier via Machine Learning

Abstract: The problem of determining whether a given quantum state is entangled lies at the heart of quantum information processing, and is known to be NP-hard in general. Despite the many methods proposed to tackle this problem in practice, such as the positive partial transpose (PPT) criterion and the k-symmetric extendibility criterion, none of them provides a general, effective solution even for small dimensions. Explicitly, separable states form a high-dimensional convex set with a vastly complicated structure. In this work, we build a new separability-entanglement classifier underpinned by machine learning techniques. Our method outperforms existing methods in generic cases in terms of both speed and accuracy, opening up avenues to explore quantum entanglement via the machine learning approach.
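
The PPT criterion mentioned above as a baseline is easy to state concretely for two qubits, where it is both necessary and sufficient. The sketch below (function names and the tolerance are illustrative assumptions) computes the partial transpose of a 4x4 density matrix and flags entanglement when it has a negative eigenvalue.

    import numpy as np

    def partial_transpose(rho):
        """Transpose the second qubit of a 4x4 two-qubit density matrix."""
        r = rho.reshape(2, 2, 2, 2)                    # indices (a, b, a', b')
        return r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap b <-> b'

    def is_entangled_ppt(rho, tol=1e-9):
        eigvals = np.linalg.eigvalsh(partial_transpose(rho))
        return bool(np.min(eigvals) < -tol)

    # Bell state |Phi+> = (|00> + |11>)/sqrt(2): entangled.
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    print(is_entangled_ppt(np.outer(phi, phi.conj())))   # True
    # Product state |00>: separable.
    e00 = np.zeros(4); e00[0] = 1
    print(is_entangled_ppt(np.outer(e00, e00)))          # False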




Mr Ce Wang, Tsinghua University

Bio: Ce Wang is a PhD student of Professor Hui Zhai at the Institute for Advanced Study, Tsinghua University. He graduated from the Department of Physics, Tsinghua University, in 2014.


Title: Machine Learning for Frustrated Classical Spin Models

Abstract: In this talk, we will apply machine learning methods to study the classical XY model on frustrated lattices, such as the triangular lattice and the Union Jack lattice. The low-temperature phases of these frustrated models exhibit both U(1) and Z2 chiral symmetry breaking; they are therefore characterized by two order parameters and, consequently, two successive phase transitions as the temperature is lowered. Using classical Monte Carlo simulation to generate a large amount of data, we analyze these data with methods such as principal component analysis (PCA). We find that the PCA method can distinguish all the different phases and locate the phase transitions, without prior knowledge of the order parameters. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and neural network methods.
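
A toy sketch of the analysis pipeline described above, with the Monte Carlo step replaced by crude stand-in samples purely to keep the example short (disordered angles for "high temperature", nearly aligned angles for "low temperature"; these stand-ins are assumptions, not the talk's data). PCA on the (cos θ, sin θ) representation then separates the two regimes without being told the order parameter.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_sites, n_samples = 100, 200

    def features(thetas):
        # Represent each XY configuration by (cos, sin) of every spin angle.
        return np.concatenate([np.cos(thetas), np.sin(thetas)], axis=1)

    high_T = rng.uniform(0, 2 * np.pi, size=(n_samples, n_sites))   # disordered
    low_T = rng.normal(0.0, 0.1, size=(n_samples, n_sites))         # nearly ordered

    X = np.vstack([features(high_T), features(low_T)])
    Z = PCA(n_components=2).fit_transform(X)

    # The two groups land in well-separated clusters along the leading component,
    # which plays the role of a learned order parameter.
    print("mean PC1, high-T samples:", Z[:n_samples, 0].mean())
    print("mean PC1, low-T samples: ", Z[n_samples:, 0].mean())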




Dr Chris Ferrie, University of Technology Sydney

Bio: Dr Chris Ferrie is a Senior Lecturer in the Centre for Quantum Software and Information, University of Technology Sydney (UTS). He holds a UTS Chancellor’s Research Fellowship and an ARC DECRA Fellowship. He obtained his PhD from the Institute for Quantum Computing in 2012. His research is in the characterisation and control of quantum devices using machine learning. He is also the author of the book Quantum Physics for Babies.



Title: Gate sequence reuse in randomised benchmarking

Abstract: Quantum computing (applying quantum gates to quantum states) is so hot right now, but how good are your gates? Randomised benchmarking is an experimental technique aimed at estimating the average error in a set of gates. Due to its ease of implementation and propensity to produce favourable numbers, it is now the de facto standard. However, standard practice can lead to the wrong conclusions. Here I will use a simple analogy to convince you that one seemingly innocuous choice (sequence reuse) is not only problematic, but a symptom of a bigger problem. I will also provide a simple fix to this problem.
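
For context, the sketch below shows the standard randomised-benchmarking analysis that the talk takes as its starting point: fit the average survival probability at sequence length m to A·p^m + B and convert p into an average error rate. The synthetic data and parameter values are assumptions for illustration only.

    import numpy as np
    from scipy.optimize import curve_fit

    def rb_model(m, A, B, p):
        return A * p**m + B

    rng = np.random.default_rng(2)
    lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    true_A, true_B, true_p = 0.5, 0.5, 0.995
    survival = (rb_model(lengths, true_A, true_B, true_p)
                + rng.normal(0, 0.005, size=lengths.size))   # simulated averages

    (A, B, p), _ = curve_fit(rb_model, lengths, survival, p0=[0.5, 0.5, 0.99])
    d = 2                                  # single-qubit case
    avg_error = (d - 1) / d * (1 - p)      # standard RB error-rate formula
    print(f"fitted p = {p:.4f}, average error per gate = {avg_error:.2e}")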




Dr Shenggang Ying, University of Technology Sydney

Bio: Dr Shenggang Ying received his PhD degree from the Department of Computer Science and Technology at Tsinghua University in 2015, and is now a postdoctoral research associate with the UTS Centre for Quantum Software and Information. Supervised by Prof. Mingsheng Ying, Shenggang began his research on reachability analysis of quantum Markov chains and quantum Markov decision processes during his PhD study. Since 2015, Shenggang has extended his research interests into quantum privacy-preserving data mining and machine learning, and has developed algorithms for quantum privacy-preserving association rule mining. Recently he has focused on designing such quantum algorithms without the help of any quantum database or quantum oracle, which are much easier to implement and can handle real-world privacy-preserving tasks in the near future.



Title: Quantum Privacy-Preserving Perceptron

Abstract: With the extensive application of machine learning, the issue of private or sensitive data in training examples is becoming more and more serious: during the training process, personal information or habits may be disclosed to unexpected persons or organisations, which can cause serious privacy problems or even financial loss. In this talk, we present a quantum privacy-preserving algorithm for machine learning with the perceptron. There are two main steps to protect the original training examples. First, when checking the current classifier, quantum tests are employed to detect a data user's possible dishonesty. Second, when updating the current classifier, private random noise is used to protect the original data. The advantages of our algorithm are: (1) it protects training examples better than the known classical methods; (2) it requires no quantum database and is thus easy to implement. Moreover, it can be readily generalized to other machine learning tasks (for instance, association rule mining and decision tree learning) while preserving both parties' privacy.
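
A classical toy sketch of the second protection step described above: perceptron updates performed on noise-masked copies of the training examples, so the raw feature vectors are never handed over directly. The noise scale and the toy data are assumptions, and the quantum tests that detect a dishonest data user have no classical counterpart here and are omitted.

    import numpy as np

    rng = np.random.default_rng(3)

    def noisy_perceptron(X, y, noise_scale=0.1, epochs=20):
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for x, label in zip(X, y):
                if label * (np.dot(w, x) + b) <= 0:      # misclassified example
                    x_masked = x + rng.normal(0, noise_scale, size=x.shape)
                    w = w + label * x_masked             # update uses masked data
                    b = b + label
        return w, b

    # Linearly separable toy data, labels in {-1, +1}.
    X = np.vstack([rng.normal(+2, 0.5, size=(50, 2)),
                   rng.normal(-2, 0.5, size=(50, 2))])
    y = np.array([+1] * 50 + [-1] * 50)
    w, b = noisy_perceptron(X, y)
    print("training accuracy with noise-masked updates:",
          np.mean(np.sign(X @ w + b) == y))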




Invited Talks for Special Track on Bioinformatics and Health Informatics:

A/Prof Yan Li, University of Southern Queensland

Bio: A/Prof Yan Li received her PhD degree from the Flinders University of South Australia. She is currently an Associate Professor in the School of Agricultural, Computational and Environmental Sciences at the University of Southern Queensland, Australia. Her research interests lie in the areas of artificial intelligence, big data technologies, Internet technologies, and signal/image processing. A/Prof Li has 140 publications, has supervised 11 PhD students to completion, is currently the principal supervisor of 8 PhD students, and has obtained about 2 million dollars in research grants. She leads the USQ Data Science Program and is the recipient of many research and teaching excellence awards, including the prestigious 2012 National Learning and Teaching Citation Award, the 2008 Queensland Smart State Smart Women Award, the 2009 USQ Research Excellence Award, and the 2015 and 2016 Research Publication Excellence Awards.






Dr Chen Li, Monash University

Bio: Dr Chen Li completed his PhD with A/Prof Ashley Buckle (NHMRC Senior Research Fellow) at the Department of Biochemistry and Molecular Biology and the Monash Biomedicine Discovery Institute in 2016, where he focused on developing bioinformatic algorithms and databases for analysing the relationship between protein structure, function and disease. After completing his PhD, Dr Li moved to the laboratories of Prof Anthony Purcell and Prof Jian Li at the Monash Biomedicine Discovery Institute, and has since extended his research into proteomics and immunopeptidomics. He has received a prestigious NHMRC CJ Martin Fellowship, which allows him to undertake two years of postdoctoral training overseas at the Institute of Systems Biology, ETH, Switzerland. To date, Chen has published three protein-centric biological databases: KinetochoreDB, PolyQ 2.0 and HIVed. In addition, he has published a wide variety of bioinformatic tools for protein structural and functional prediction. Now working with Tony and Jian, Chen continues to develop novel bioinformatics tools and knowledgebases for analysing proteomic and immunopeptidomic data to further advance the field.




Dr Olivier Salvado, CSIRO

Bio: Dr Olivier Salvado graduated with a Master's degree in Electrical Engineering from ESIEE Paris and University Paris XII, where he studied signal processing, control systems, and artificial intelligence. He worked for several years designing industrial control systems, applying advanced techniques to improve automated machine performance. Attracted by medical challenges, he then graduated in 2006 with a PhD in Biomedical Engineering, specialising in medical imaging, from Case Western Reserve University, Cleveland, OH, USA. His research was on developing machine learning technologies to analyse MRI data. He joined the radiology department of the Cleveland University Hospitals, working on catheter-based MRI imaging, before moving to Australia as a research scientist for the CSIRO, focussing on image analysis. He now leads the CSIRO Biomedical Informatics Group, based at the Australian eHealth Research Centre on the Royal Brisbane and Women’s Hospital campus in Brisbane. His group develops innovative technologies to analyse medical data, including MRI, PET, biomarkers, and genetics. Dr Salvado holds several adjunct positions with Australian universities, is a regular assessor of scientific grants and a reviewer for international journals, and is regularly involved in reviewing and organising international conferences. He graduated with an executive MBA from the Australian Graduate School of Management in 2014, and is an honorary Professor at the University of Queensland.



PhD School Tutorials:

Dr Divesh Srivastava, AT&T Labs-Research

Bio: Divesh Srivastava is the head of Database Research at AT&T Labs-Research. He is a Fellow of the Association for Computing Machinery (ACM) and the managing editor of the Proceedings of the VLDB Endowment (PVLDB). He has served as a trustee of the VLDB Endowment, as an associate editor of the ACM Transactions on Database Systems (TODS), as an associate Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (TKDE), and as a general or program committee co-chair of many conferences. He has presented keynote talks at several international conferences, and his research interests and publications span a variety of topics in data management. He received his Ph.D. from the University of Wisconsin, Madison, USA, and his Bachelor of Technology from the Indian Institute of Technology, Bombay, India.


Title: Information theory for data management

Abstract: This tutorial explores the use of information theory as a tool to express and quantify notions of information content and information transfer for representing and analyzing data. We do so in an application-driven way, using a variety of data management applications, including database design, data integration and data anonymization.
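
As a small taste of the tutorial's theme, the sketch below uses empirical entropy to quantify how much information one column carries about another: H(Y|X) = 0 certifies that the functional dependency X → Y holds in the instance, which is directly relevant to database design. The toy table and column names are assumptions for illustration.

    import math
    from collections import Counter

    def entropy(values):
        n = len(values)
        return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

    def conditional_entropy(xs, ys):
        """H(Y | X) computed from paired samples, via H(X, Y) - H(X)."""
        return entropy(list(zip(xs, ys))) - entropy(xs)

    zip_code = ["4000", "4000", "2000", "2000", "3000"]
    city     = ["Brisbane", "Brisbane", "Sydney", "Sydney", "Melbourne"]
    price    = [450, 470, 820, 790, 610]

    print("H(city | zip)  =", conditional_entropy(zip_code, city))    # 0.0 -> FD holds
    print("H(price | zip) =", conditional_entropy(zip_code, price))   # > 0 -> no FD
    print("I(zip ; city)  =", entropy(city) - conditional_entropy(zip_code, city))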



Dr Gianluca Demartini, The University of Sheffield

Bio: Dr. Gianluca Demartini is a Senior Lecturer in Data Science at the University of Sheffield, Information School. His research is currently supported by the UK Engineering and Physical Sciences Research Council (EPSRC) and by the EU H2020 framework program. His main research interests are Information Retrieval, Semantic Web, and Human Computation. He received the Best Paper Award at the European Conference on Information Retrieval (ECIR) in 2016 and the Best Demo Award at the International Semantic Web Conference (ISWC) in 2011. He has published more than 70 peer-reviewed scientific publications, including papers at major venues such as WWW, ACM SIGIR, VLDBJ, ISWC, and ACM CHI. He has given several invited talks, tutorials, and keynotes at a number of academic conferences (e.g., ISWC, ICWSM, WebScience, and the RuSSIR Summer School), companies (e.g., Facebook), and Dagstuhl seminars. He has been an ACM Distinguished Speaker since 2015. He serves as an area editor for the Journal of Web Semantics, as Student Coordinator for ISWC 2017, and as a Senior Program Committee member for the AAAI Conference on Human Computation and Crowdsourcing (HCOMP), the International Conference on Web Engineering (ICWE), and the ACM International Conference on Information and Knowledge Management (CIKM). He is a Program Committee member for several conferences including WWW, SIGIR, KDD, IJCAI, ISWC, and ICWSM. He was co-chair of the Human Computation and Crowdsourcing Track at ESWC 2015, and co-organized the Entity Ranking Track at the Initiative for the Evaluation of XML Retrieval in 2008 and 2009. Before joining the University of Sheffield, he was a postdoctoral researcher at the eXascale Infolab at the University of Fribourg in Switzerland, a visiting researcher at UC Berkeley, a junior researcher at the L3S Research Center in Germany, and an intern at Yahoo! Research in Spain. In 2011, he obtained a Ph.D. in Computer Science from the Leibniz University of Hanover, focusing on Semantic Search.


Title: Crowdsourcing for Data Management

Abstract: In this session we will introduce the concepts of micro-task crowdsourcing and human computation, presenting examples of hybrid human-machine systems for entity linking, data integration and search. Such systems are examples of how the use of human intelligence at scale, in combination with machine-based algorithms, can outperform traditional data management systems. In this context, we will then discuss efficiency and effectiveness challenges of micro-task crowdsourcing platforms, including spam, quality control, task assignment models, and job scheduling.
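
One of the quality-control ideas mentioned above can be made concrete in a few lines: assign each task to several workers and aggregate their redundant answers by majority vote. The task ids, answers and tie-breaking behaviour below are illustrative assumptions; production platforms use more elaborate worker models.

    from collections import Counter, defaultdict

    def majority_vote(answers):
        """answers: iterable of (task_id, worker_id, label) -> {task_id: label}."""
        per_task = defaultdict(list)
        for task, _worker, label in answers:
            per_task[task].append(label)
        return {task: Counter(labels).most_common(1)[0][0]
                for task, labels in per_task.items()}

    answers = [
        ("pair-1", "w1", "match"), ("pair-1", "w2", "match"), ("pair-1", "w3", "no match"),
        ("pair-2", "w1", "no match"), ("pair-2", "w4", "no match"), ("pair-2", "w5", "match"),
    ]
    print(majority_vote(answers))   # {'pair-1': 'match', 'pair-2': 'no match'}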



Prof Rui Zhang, The University of Melbourne

Bio: Dr Rui Zhang obtained his Bachelor's degree from Tsinghua University in 2001 and his PhD from the National University of Singapore in 2006. Before joining the University of Melbourne, he was a visiting research scientist at AT&T Labs-Research in New Jersey and at Microsoft Research in Redmond, Washington. Since January 2007, he has been a faculty member in the Department of Computing and Information Systems at The University of Melbourne. Recently, he has been a regular visiting researcher at Microsoft Research Asia in Beijing, collaborating on his ARC Future Fellowship project. Dr Zhang's research interest is data and information management in general, particularly in the areas of high-performance computing, spatial and temporal data analytics, moving object management, indexing techniques, data streams and sequence databases. His inventions have been adopted by major IT companies such as AT&T and Microsoft. In 2015, Dr Zhang received the Chris Wallace Award for Outstanding Research from the Computing Research and Education Association of Australasia (CORE), in recognition of his significant contributions to the management and mining of spatiotemporal and multidimensional data. Representative projects Dr Zhang is leading are listed on the Spatial and Temporal Data Analytics page.


Title: Contextual Intent Tracking for Personal Assistants

Abstract: A new paradigm of recommendation is emerging in intelligent personal assistants such as Apple's Siri, Google Now, and Microsoft Cortana, which recommends "the right information at the right time" and proactively helps you "get things done". This type of recommendation requires precisely tracking users' contemporaneous intent, i.e., what type of information (e.g., weather, stock prices) users currently intend to know, and what tasks (e.g., playing music, getting taxis) they intend to do. Users' intent is closely related to context, which includes both external environments such as time and location, and users' internal activities that can be sensed by personal assistants. The relationship between context and intent exhibits complicated co-occurring and sequential correlation, and contextual signals are also heterogeneous and sparse, which makes modeling the context-intent relationship a challenging task. To solve the intent tracking problem, we propose the Kalman filter regularized PARAFAC2 (KP2) nowcasting model, which compactly represents the structure and co-movement of context and intent. The KP2 model utilizes collaborative capabilities among users, and learns for each user a personalized dynamic system that enables efficient nowcasting of users' intent. Extensive experiments using real-world data sets from a commercial personal assistant show that the KP2 model significantly outperforms various methods, and provides inspiring implications for deploying large-scale proactive recommendation systems in personal assistants.
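
The KP2 model itself is beyond a short snippet, but its dynamic-system ingredient is a Kalman filter. The sketch below shows a generic Kalman predict/update cycle tracking a latent intent-intensity vector from noisy contextual observations; the transition and observation matrices and the toy data are assumptions, and this is an illustration of the building block, not the speaker's model.

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle: latent state x, covariance P, observation z."""
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy run: 2-dimensional latent intent, observed directly with noise.
    rng = np.random.default_rng(4)
    F = np.eye(2); H = np.eye(2)
    Q = 0.01 * np.eye(2); R = 0.1 * np.eye(2)
    x, P = np.zeros(2), np.eye(2)
    true_state = np.array([1.0, -0.5])
    for _ in range(50):
        z = true_state + rng.normal(0, 0.3, size=2)
        x, P = kalman_step(x, P, z, F, H, Q, R)
    print("filtered estimate:", x)   # approaches [1.0, -0.5]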



Dr Junhao Gan, The University of Queensland

Bio: Dr. Junhao Gan is currently a Post-Doctoral Research Fellow in the School of Information Technology and Electrical Engineering (ITEE) at the University of Queensland (UQ). Before starting the postdoc appointment, he completed his PhD in the School of ITEE at UQ in 2017 under the supervision of Prof. Yufei Tao. He obtained his bachelor's and master's degrees from the School of Software, Sun Yat-Sen University, in 2011 and 2013, respectively. His research interest is in designing practical algorithms with non-trivial theoretical guarantees for solving problems on massive data. He has published several papers at SIGMOD and in TODS, and won the Best Paper Award at SIGMOD 2015.




Title: Euclidean DBSCAN Revisited: From Static to Dynamic

Abstract: DBSCAN is a highly successful density-based clustering method for multi-dimensional points. Its seminal paper won the Test-of-Time Award at KDD 2014, and it has over 9000 citations on Google Scholar. Although DBSCAN has seen extensive application, its computational hardness remained unresolved until recent work at SIGMOD 2015. This talk focuses on the problem of computing DBSCAN clusters on a set of n points in d-dimensional space from scratch (assuming no existing indexes) under the Euclidean distance. More specifically, we first show that the DBSCAN problem is “hard” in three or higher dimensional space. Motivated by this, we propose a relaxed version of the problem called ρ-approximate DBSCAN, which returns the same clusters as DBSCAN unless the clusters are “unstable”. The ρ-approximate problem is “easy” regardless of the constant dimensionality. This talk further investigates the algorithmic principles of dynamic clustering with DBSCAN. Surprisingly, we prove that the ρ-approximate version suffers from the very same hardness when the dataset is fully dynamic. We also show that this issue goes away as soon as a tiny further relaxation is applied, while still ensuring the same clustering quality as ρ-approximate DBSCAN.
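
For readers who want to try exact DBSCAN before worrying about its hardness, the sketch below runs scikit-learn's implementation on toy data, mainly to fix the roles of its two parameters: eps (the neighbourhood radius) and min_samples (the density threshold for core points). The toy data and parameter values are assumptions for illustration.

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(5)
    cluster_a = rng.normal(loc=[0, 0], scale=0.2, size=(100, 2))
    cluster_b = rng.normal(loc=[3, 3], scale=0.2, size=(100, 2))
    noise = rng.uniform(low=-2, high=5, size=(10, 2))
    X = np.vstack([cluster_a, cluster_b, noise])

    labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)   # -1 marks noise points
    print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
    print("points labelled as noise:", int(np.sum(labels == -1)))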



Dr Wen Hua, The University of Queensland

Bio: Dr Wen Hua is a Lecturer at the School of Information Technology and Electrical Engineering (ITEE), the University of Queensland. She received her PhD and bachelor's degrees in Computer Science from Renmin University of China in 2015 and 2010, respectively. After completing her PhD, she was appointed as a Postdoctoral Research Fellow at the University of Queensland. Her research interests include sensor data analytics, information extraction, data mining, and social media analysis. She has published papers as the lead author in reputable journals and international conferences such as SIGMOD, PVLDB, ICDE, TKDE, IJCAI, CIKM, WSDM, and WWWJ. She won the Best Paper Award at ICDE 2015, and was awarded an Advance Queensland Research Fellowship in 2017.


Title: Big Data Meets the Microgrid: Challenges and Opportunities

Abstract: A microgrid is a discrete energy system consisting of distributed energy sources (including demand management, storage, and generation) and loads capable of operating in parallel with, or independently from, the main power grid. It paves a way to effectively integrate various sources of distributed generation, especially Renewable Energy Sources (RES), and meanwhile provides a good solution for supplying power in case of an emergency by having the ability to change between islanded mode and grid-connected mode. On the other hand, control and protection are big challenges in this type of network configuration, which draws our attention to utilize data analytics techniques for enabling smarter control in the microgrid ecosystem. With an extensive range of sensors installed in the microgrid, it has been continuously generating streaming machine-to-machine (M2M) data to support dynamic optimization of power generation, consumption and storage. Microgrids bring significant new challenges and opportunities to data management and data analytics, from data acquisition, data quality control, data compression, data fusion, data disaggregation, data mining, and data prediction. In this talk we will introduce this emerging area to the data management community, with an emphasis on the challenges and some promising research topics on large scale mcirogrid data analytics.