Following is the list of confirmed speakers. In addition to
these, 20 German and 10 Indian students will give presentations.
Ulrich Meyer is a
full professor of Algorithm Engineering at Goethe University Frankfurt.
He obtained his Ph.D. in Computer Science from the Saarland University
Saarbrücken in 2002. Before joining Goethe University in 2007 he was a
Postdoc, Research Associate, and senior researcher (at the level of
Associate Professor) at Max-Planck Institute for Computer Science. He
has also been a Visiting Assistant Professor at Duke University and a
Guest Researcher at the Hungarian Academy of Sciences.
His main research area is algorithms for advanced models of computation
(e.g., parallel computing and memory hierarchies) with a focus on graph problems.
He is currently coordinating the German Research Cluster on Algorithms
for big data, funded by the German Research Foundation (DFG).
Goethe University Frankfurt am Main, Germany
Traversal and generation of very large graphs in memory hierarchies.
Very large graphs arise naturally in such diverse domains as social
networks, web search, computational biology, machine learning and more.
Even simple tasks like traversing these graphs become challenging once
they cannot be stored in the main memory of one machine. If the graph is
stored in external memory, then many algorithms that perform very well
on graphs stored in internal memory become inefficient because
of the large number of I/Os they incur. In order to alleviate the I/O
bottleneck, a number of external memory graph traversal algorithms have
been designed with provable worst-case guarantees. In the talk I
highlight some techniques used in the design and engineering of such
algorithms and survey the state of the art in I/O-efficient graph traversal.
In the engineering process mentioned above, artificially generated input
graphs play an important role for systematic testing and tuning. In big
data settings, however, not only processing huge graphs but also the
efficient generation of appropriate test instances itself becomes
challenging. Hence, I will also discuss two recent results for
large-scale graph generation under resource constraints.
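The I/O bottleneck discussed above can be made concrete with a toy cost model (an illustrative sketch of the general batching idea, not one of the algorithms from the talk): adjacency data lives on disk in blocks of B items, a cache holds only the most recently read block, and sorting the access requests first, a standard trick in I/O-efficient graph algorithms, drastically reduces block transfers:

```python
import random

def block_ios(requests, B):
    """Count block transfers for a sequence of item requests, assuming
    storage in blocks of B consecutive items and a cache that holds only
    the single most recently read block."""
    ios, current = 0, None
    for v in requests:
        blk = v // B
        if blk != current:
            ios += 1
            current = blk
    return ios

random.seed(0)
requests = [random.randrange(10_000) for _ in range(5_000)]
B = 100
unsorted_cost = block_ios(requests, B)          # close to one I/O per request
sorted_cost = block_ios(sorted(requests), B)    # at most one I/O per block
```

Sorting costs I/Os too, of course; the point of the external-memory model is that sorting N items takes far fewer than N I/Os.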
Anand Srivastav
Anand Srivastav studied Mathematics and Physics at the University of Muenster, Germany, from 1978 to 1984. He completed his PhD in Mathematics with a thesis in functional analysis at the University of Muenster in spring 1988. Thereafter, he joined the Research Institute of Discrete Mathematics, University of Bonn, where he stayed from 1988 to 1993. After academic stays as visiting faculty at the University of Minnesota, Yale University, and New York University, he joined the Free University of Berlin in 1994, and in 1995 the Humboldt University of Berlin. In Berlin he completed his Habilitation in Computer Science with a thesis on "Derandomized Algorithms in Combinatorial Optimization" in 1996. Since 1997 he has been professor and chair of Discrete Optimization at Kiel University. From 2000 to 2005 he was the speaker of the DFG research training group 357 ("Graduiertenkolleg") "Efficient Algorithms and Multiscale Methods" at Kiel University.
Since 2007 he has been a PI in the DFG cluster of excellence "The Future Ocean", and since 2010 he has been speaker of the research platforms of the cluster. In 2013 he held a guest professorship at the Indo-Max-Planck Center for Computer Science at IIT Delhi. His research interests are combinatorial optimization, discrepancy theory, randomized and derandomized algorithms, algorithm engineering, and big data applications in marine science and life science, with a personal emphasis on Indo-German cooperation in computer science and mathematics.
Kiel University, Germany
Computing Eulerian Tours in Streaming Models
We present new algorithms for the computation of Eulerian tours, in the form of their successor function, in graphs in the semi-streaming model with optimal one-pass complexity, and also briefly sketch the sorting complexity. We introduce a new technique representing the merging of tours in algebraic form.
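For context on the successor-function output format (an offline sketch in the usual RAM model; the semi-streaming algorithm itself is not reproduced here), Hierholzer's classical algorithm computes an Eulerian tour, which can then be reported as a successor function mapping each traversed edge to the next:

```python
from collections import defaultdict

def euler_tour_successor(edges):
    """edges: list of undirected edges (u, v) of an Eulerian multigraph.
    Returns the tour as a vertex list and as a successor map taking each
    traversed directed edge to the next one (assumes each directed vertex
    pair is traversed at most once)."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, tour = [edges[0][0]], []
    while stack:
        u = stack[-1]
        while adj[u] and used[adj[u][-1][1]]:
            adj[u].pop()                      # drop already-traversed edges
        if adj[u]:
            v, i = adj[u].pop()
            used[i] = True
            stack.append(v)                   # walk on
        else:
            tour.append(stack.pop())          # retreat: vertex is finished
    tour.reverse()
    succ = {(tour[k], tour[k + 1]): (tour[k + 1], tour[k + 2])
            for k in range(len(tour) - 2)}
    return tour, succ

# Triangle 0-1-2 plus a doubled edge between 0 and 3: all degrees even.
tour, succ = euler_tour_successor([(0, 1), (1, 2), (2, 0), (0, 3), (3, 0)])
```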
Dorothea Wagner heads the Institute of Theoretical Informatics at the Karlsruhe
Institute of Technology. She earned her PhD in 1986 from RWTH Aachen and
her habilitation in 1992 from TU Berlin. Her research interests are in the
field of graph algorithms and algorithm engineering with a focus on
traffic optimization, social network analysis and network visualization.
In 2012 she received the Google Focused Research Award for the project
"Next Generation Route Planner". She is a member of several editorial
boards and scientific advisory committees. From 2007 to 2014 she was
vice president of the German Research Foundation (DFG). Since 2015 she
has been a member of the German Council of Science and Humanities.
Karlsruhe Institute of Technology, Germany
Algorithmic Challenges in Power Grids
Amit Kumar is currently the Jaswinder and Tarwinder Chadha Chair Professor in the Department of Computer Science and Engineering at IIT Delhi. He holds a Bachelor's degree from IIT Kanpur and a PhD from Cornell University. Prior to joining IIT Delhi in 2003, he worked as a Member of Technical Staff at Bell Laboratories, New Jersey, and has held several visiting positions at Microsoft Research India, IBM Research India, and the Max Planck Institute, Germany. He has received the INAE (Indian National Academy of Engineering) Young Engineer Award, the INSA (Indian National Science Academy) Young Scientist Award, and an IBM Faculty Award, and was a Max-Planck-India partner group research fellow during 2005--09. He has been selected for the Shanti Swarup Bhatnagar Award, 2018, for Mathematical Sciences.
Online algorithms for packing and covering problems
Online algorithms are used to model problems where the input is revealed over time and the algorithm needs to make irrevocable decisions. This is typical of scenarios where a large amount of data needs to be handled, e.g., a data center may receive millions of job requests online and would like to decide how to service them using a small number of machines. In this talk, I shall discuss two problems which have applications in such settings, namely online bin packing, and online set cover. I shall present some recent models for these problems which study the trade-off between the quality of the solution and the extent to which the notion of "irrevocability" can be violated.
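As a minimal concrete example of the online model described above (the classic First Fit rule, not the relaxed-irrevocability models from the talk), each arriving item is placed irrevocably in the first open bin with room:

```python
def first_fit(items, capacity=1.0):
    """Place each arriving item into the first open bin with enough room,
    opening a new bin otherwise. Decisions are irrevocable."""
    bins = []                        # remaining capacity of each open bin
    assignment = []                  # bin index chosen for each item
    for size in items:
        for i, free in enumerate(bins):
            if size <= free + 1e-9:
                bins[i] = free - size
                assignment.append(i)
                break
        else:
            bins.append(capacity - size)
            assignment.append(len(bins) - 1)
    return assignment, len(bins)

# A small made-up request stream; total size 3.1, so 4 bins are optimal here.
assignment, used = first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1])
```

On worst-case streams First Fit can be forced well above the optimum, which is exactly the kind of gap that motivates models allowing limited repacking.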
Christopher L. Barrett is Professor and Executive Director of the Biocomplexity Institute and Initiative at the University of Virginia. His research interests are in large multiscale, high-performance modeling and simulation systems grounded in computational and information sciences spanning mathematical, biological, psychological and social sciences. Additionally, Dr. Barrett’s interests include theoretical and applied research in natural and engineered intelligent systems. He is the recipient of the 2012–2013 Jubilee Professorship in Computer Science and Engineering at Chalmers University in Sweden and is a member of the 2010 Royal Colloquium for the King of Sweden. He has received Distinguished Research, Service, Advisory and Security Awards from the U.S. Navy, Los Alamos National Laboratory, and the Alliance for Transportation Research. He has served as advisor to U.S. government agencies, the Commonwealth of Virginia, the European Commission and others.
University of Virginia, USA
Samik Datta is a Principal Data Scientist at Flipkart, India. He joined Flipkart in 2014 as a Data Scientist. Prior to this, he served at Bell Laboratories as a Member of Technical Staff from 2008 to 2014, and as a Software Development Engineer at Microsoft from 2005 to 2006.
He received his M.Tech in Computer Science and Engineering from IISc in 2008 and holds a B.E. in Computer Science and Engineering from Jadavpur University.
Recurrent Recommendations in Flipkart
We study the problem of recurrent recommendation in the backdrop of
e-commerce in an emerging market. We begin with a comprehensive survey of
the relevant literature that embraces a temporal point process-based
formalism to model the dynamics and comment on the recent trends, such as a
shift in focus from explicit axiomatic construction of the conditional
intensity function to an implicit definition via RNN. We, next, carefully
evaluate each of these alternatives in light of the idiosyncrasies
presented by an emerging e-commerce business: considerations ranging from
the availability of doubly-censored event sequences to a heavy skew in the
inter-purchase interval distributions. To alleviate these challenges as
well as to surmount the scale, we present a practical approach that employs
robust plug-in estimators and computes the recommendations with a scalable
implementation of the convolution operator. We close with an account of the
lessons learned during the design and implementation of a recurrent
recommender system for Flipkart Supermart - the new grocery wing of India's
largest e-commerce marketplace.
Soumen Chakrabarti is a Professor of Computer Science at IIT Bombay.
Soumen Chakrabarti received his B.Tech in Computer Science from the Indian Institute of Technology, Kharagpur, in 1991 and his M.S. and Ph.D. in Computer Science from the University of California, Berkeley in 1992 and 1996. At Berkeley he worked on compilers and runtime systems for running scalable parallel scientific software on message passing multiprocessors.
He was a Research Staff Member at IBM Almaden Research Center from 1996 to 1999, where he worked on the Clever Web search project and led the Focused Crawling project. In 1999 he joined the Department of Computer Science and Engineering at the Indian Institute of Technology, Bombay, where he was Associate Professor during 2003--2014, and Professor since then. In 2004 he was Visiting Associate professor at Carnegie-Mellon University. During 2014--2016 he was Visiting Scientist at Google.
He has published in the WWW, SIGIR, SIGKDD, EMNLP, SIGMOD, VLDB, ICDE, SODA, STOC, SPAA and other conferences as well as Scientific American, IEEE Computer, VLDB and other journals. He won the best paper award at WWW 1999. He was coauthor on the best student paper at ECML 2008. His work on keyword search in databases got the 10-year influential paper award at ICDE 2012. He won the Bhatnagar Prize in 2014. He is fellow of Indian National Academy of Engineering (INAE), the Indian Academy of Sciences (IAS), and the Indian National Science Academy (INSA). He holds eleven patents on Web-related inventions. He is also author of one of the earliest books on Web search and mining.
He has served as technical advisor to search companies and vice-chair or program committee member for WWW, SIGIR, SIGKDD, VLDB, ICDE, SODA and other conferences, and guest editor or editorial board member for Foundations and Trends in Information Retrieval, DMKD and TKDE journals. He has served as program chair for WSDM 2008 and WWW 2010.
IIT Bombay, Mumbai, India
Learning New Type Representations from Knowledge Graphs
Beyond words, continuous representations of entities and relations have led to large recent improvements in inference of facts in knowledge bases, as well as applications like question answering. Comparatively less has been done about modeling types and their associated relations (is-instance-of and is-subtype-of). In the first part of the talk, I will present a new representation of types as hyper-rectangles rather than points, which are commonly used to embed words and entities. I will propose an elementary loss function representing rectangle containment. I will also demonstrate that recent work on type representation has used a questionable evaluation protocol, and propose a sound alternative. Experiments using type supervision from the WordNet noun hierarchy show the superiority of our approach. In the second part of the talk, I will move to unsupervised discovery of type representation. The idea is to represent each entity using a type and a residual vector. Each relation is represented by two type-checking vectors and an entity-to-entity compatibility checking vector. We do not use any supervision from KG schema to guide the type (checking) embeddings. Experiments on FB15k and YAGO show two benefits. First, inferring new triples becomes more accurate, exceeding state of the art. Second, the type embeddings are very good predictors of KG types to which the entities belong, although this information was not available during training.
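The geometric idea behind rectangle embeddings can be sketched in a few lines (a toy rendering of a containment-style loss; the actual representation, loss function, and training in the talk differ): a type is an axis-aligned box, subtypes should nest, and the loss measures how far one box protrudes outside another:

```python
def containment_loss(a_lo, a_hi, b_lo, b_hi):
    """How far box a = [a_lo, a_hi] protrudes outside box b, summed over
    dimensions; zero exactly when a is contained in b."""
    loss = 0.0
    for al, ah, bl, bh in zip(a_lo, a_hi, b_lo, b_hi):
        loss += max(0.0, bl - al)    # a starts before b does
        loss += max(0.0, ah - bh)    # a ends after b does
    return loss

# Hypothetical 2-d type boxes: "dog" should nest inside "animal".
animal = ([0.0, 0.0], [1.0, 1.0])
dog = ([0.2, 0.1], [0.5, 0.6])
vehicle = ([2.0, 2.0], [3.0, 3.0])
l_dog = containment_loss(*dog, *animal)
l_vehicle = containment_loss(*vehicle, *animal)
```

Minimising such a loss over (is-subtype-of) pairs pushes subtype boxes inside supertype boxes, which point embeddings cannot express directly.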
Sunita Sarawagi researches in the fields of databases, data mining, and machine learning. Her current research interests are deep learning, graphical models, and information extraction. She is Institute Chair Professor at IIT Bombay. She got her PhD in databases from the University of California at Berkeley and a bachelor's degree from IIT Kharagpur. Her past affiliations include visiting faculty at Google Research, Mountain View, CA, visiting faculty at CMU, Pittsburgh, and research staff member at IBM Almaden Research Center. She has several publications in databases and data mining and several patents.
She serves on the board of directors of ACM SIGKDD and VLDB foundation. She was program chair for the ACM SIGKDD 2008 conference, research track co-chair for the VLDB 2011 conference and has served as program committee member for SIGMOD, VLDB, SIGKDD, ICDE, and ICML conferences. She is/was on the editorial board of the ACM TODS, ACM TKDD, and FnT for machine learning journals.
IIT Bombay, Mumbai, India
Training and Inference of Sequence Prediction Models: the old and the new
Early sequence prediction models such as Conditional Random Fields (CRFs) made conditional independence assumptions among tokens in the predicted sequence and yielded convex training objectives. In contrast, modern deep-learning-based sequence prediction models assume full dependency and have non-convex training objectives. In this talk we will contrast the algorithmic challenges in training and inference of sequence prediction models as we go from CRFs to modern deep-learning models.
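As a reminder of what chain-structured models buy algorithmically (a generic textbook sketch with made-up scores, not tied to any specific system from the talk): the conditional independence of a linear-chain model lets exact inference run in polynomial time via Viterbi dynamic programming, something full-dependency models do not admit:

```python
def viterbi(emissions, transitions):
    """emissions[t][y]: score of label y at position t.
    transitions[y][y2]: score of label y immediately followed by y2.
    Returns the highest-scoring label sequence (max-sum dynamic program)."""
    n_labels = len(emissions[0])
    score = list(emissions[0])
    back = []
    for t in range(1, len(emissions)):
        new, ptr = [], []
        for y2 in range(n_labels):
            best = max(range(n_labels),
                       key=lambda y: score[y] + transitions[y][y2])
            new.append(score[best] + transitions[best][y2] + emissions[t][y2])
            ptr.append(best)
        score, back = new, back + [ptr]
    y = max(range(n_labels), key=lambda k: score[k])
    path = [y]
    for ptr in reversed(back):       # follow back-pointers to recover the path
        y = ptr[y]
        path.append(y)
    return path[::-1]

# Made-up scores: two labels, three positions.
path = viterbi([[2, 0], [0, 1], [2, 0]], [[1, 0], [0, 1]])
```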
Devdatt Dubhashi is a Professor at Department of Computer Science and Engineering, Chalmers University of Technology, Sweden.
He received his BTech in Computer Science and Engineering from IIT Delhi and Masters and PhD degrees from Cornell University, US. He leads the Algorithms and Machine Learning Group at Chalmers and coordinates BigData@Chalmers. His main interests are in machine learning, the design and analysis of randomized algorithms, and computational biology.
Chalmers University of Technology, Sweden
Basic Concepts of Large Scale Optimization for Machine Learning
We will give a rapid overview of some key concepts in first order optimization methods that have played a very important role in modern machine learning algorithms. In particular we will cover methods for stochastic optimization for so called finite sum problems, giving a number of concrete examples to illustrate the concepts.
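A minimal sketch of stochastic optimization for a finite-sum objective (synthetic one-dimensional data and made-up constants): f(w) = (1/n) Σᵢ (w·xᵢ − yᵢ)², where SGD samples one summand per step and follows its gradient with a decaying step size:

```python
import random

random.seed(1)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]            # made-up data, roughly y = 2x

w = 0.0
for t in range(2000):
    i = random.randrange(len(xs))            # sample one summand f_i
    grad = 2 * (w * xs[i] - ys[i]) * xs[i]   # gradient of that term alone
    w -= 0.02 / (1 + 0.01 * t) * grad        # decaying step size
# w approaches the least-squares solution sum(x*y)/sum(x*x) = 1.99
```

Each step touches a single data point, which is what makes the method scale to sums with millions of terms; variance-reduced variants refine exactly this scheme.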
Guy Even is a Professor in Dept. of Electrical Engineering - Systems at Tel Aviv University.
He is well known for his research in approximation algorithms for NP-complete problems related to VLSI, computer arithmetic, the design of floating-point units, and systolic arrays.
Guy Even played a neat trick on Google. Find out what Google thinks of Guy Even http://hyde.eng.tau.ac.il/google.html
In some photos on his webpage, there are many guys and Even Guy is there too.
Tel Aviv University, Israel
On the design of adaptive filters
An approximate set membership data structure supports inserts, deletes, and membership queries. The data structure, called a filter, is approximate in the sense that it may answer "yes" to queries not in the set, but the probability of such a false positive event is bounded by a parameter. The challenges in designing filters are to reduce space, the number of memory accesses, and repeated false positive events. I will overview the main components of an adaptive filter.
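For background, the plain (non-adaptive) Bloom filter is the classic approximate-membership structure this line of work builds on: k hashed bit positions are set per insert and checked per query, so false negatives are impossible and false positives occur with bounded probability. A minimal sketch (this simple variant supports neither deletes nor adaptivity):

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)     # one byte per bit, for simplicity

    def _positions(self, item):
        for i in range(self.k):      # k salted hashes stand in for k hash functions
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
for word in ["alpha", "beta", "gamma"]:
    bf.add(word)
```

An adaptive filter additionally reacts when a false positive is observed, so the same wrong answer is not repeated; that machinery is the subject of the talk and is not shown here.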
He completed his B.Tech at IIT Kanpur in 1987 and his MS and PhD at the
University of Texas at Austin in 1992. He was an Assistant Professor in
Computer Science at Rutgers University for a few years before joining
IIT Kanpur in 1997. He has worked on parallel and deductive database
systems and query optimization, and now works on data stream algorithms.
IIT Kanpur, India
Sketching and Sampling for Numerical Linear Algebra
A large class of "Big Data" analyses use matrix data and leverage basic routines of numerical linear algebra, optimization and graph algorithms. In this talk, we will discuss recent advances in algorithms for numerical linear algebra and optimization that have come from the techniques of linear sketching and sampling. In sketching, dimensionality reduction is achieved by multiplying the given matrix by a random matrix such that certain key properties are preserved. This approximate preservation of solutions allows us to run the original computation on the reduced randomized matrix, thereby accelerating the solution for the original problem. We will consider the least squares problem, low rank approximation of matrices, and applications to optimization.
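The norm-preservation property at the heart of linear sketching can be demonstrated directly (a pure-Python toy with a dense Gaussian sketch; practical sketches for regression use fast structured or sparse matrices):

```python
import math
import random

random.seed(42)

def sketch(x, k):
    """Multiply x by a k-row random Gaussian matrix, scaled by 1/sqrt(k),
    so that the Euclidean norm of x is approximately preserved."""
    return [sum(random.gauss(0, 1) * xi for xi in x) / math.sqrt(k)
            for _ in range(k)]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

x = [float(i % 7) for i in range(2000)]   # an arbitrary 2000-dimensional vector
sx = sketch(x, 400)                       # compressed to 400 dimensions
distortion = norm(sx) / norm(x)           # close to 1 with high probability
```

Since least-squares residuals are Euclidean norms, a sketch that preserves norms over the relevant subspace lets one solve the much smaller sketched regression problem instead of the original one.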
John Iacono is a Professor in the Algorithms group at the Université libre de Bruxelles in Brussels, Belgium, and a Research Professor at the Tandon School of Engineering of New York University in Brooklyn, USA. He obtained a Bachelor's and a Master of Science in Computer Science from Stevens Institute of Technology in Hoboken, USA in 1996. John received his PhD in Computer Science from Rutgers, The State University of New Jersey -- New Brunswick in 2001. John's research interests are in data structures and computational geometry. John has received an Alfred P. Sloan Fellowship, a Fulbright Research Fellowship, the best paper award at the 2014 European Symposium on Algorithms, and the best paper award at the 2002 International Symposium on Algorithms and Computation.
Université libre de Bruxelles, Belgium
Modern computers exhibit a strong preference for locality of reference, whereby accessing data that is close to something recently accessed is much faster than accessing randomly located data. In this talk I will describe several data structures for fundamental problems whose design principle is to maximise locality of reference. These structures are usually analysed in the cache-oblivious model, but I will also try to draw direct connections between these structures and locality, as well as between the cache-oblivious model and locality.
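A standard example of this design principle is the cache-oblivious recursive matrix transpose (a textbook sketch, not one of the talk's data structures): recursive splitting produces ever-smaller tiles, so some recursion level fits whatever cache exists, without the code knowing any cache parameters:

```python
def co_transpose(a, b, r0, r1, c0, c1):
    """Write the transpose of a[r0:r1][c0:c1] into b (b[j][i] = a[i][j]) by
    recursively halving the longer dimension; no cache parameter appears."""
    if (r1 - r0) * (c1 - c0) <= 16:              # tiny tile: transpose directly
        for i in range(r0, r1):
            for j in range(c0, c1):
                b[j][i] = a[i][j]
    elif r1 - r0 >= c1 - c0:                     # split rows
        mid = (r0 + r1) // 2
        co_transpose(a, b, r0, mid, c0, c1)
        co_transpose(a, b, mid, r1, c0, c1)
    else:                                        # split columns
        mid = (c0 + c1) // 2
        co_transpose(a, b, r0, r1, c0, mid)
        co_transpose(a, b, r0, r1, mid, c1)

n, m = 7, 5
a = [[i * m + j for j in range(m)] for i in range(n)]
b = [[0] * n for _ in range(m)]
co_transpose(a, b, 0, n, 0, m)
```

A row-by-row transpose touches the destination in strides and thrashes small caches; the recursive version incurs an asymptotically optimal number of cache misses for any cache size.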
Giuseppe (Pino) Italiano
Giuseppe Italiano is a Professor of Computer Science at LUISS University. After getting a Ph.D. in Computer Science at Columbia University, Giuseppe worked as a Research Staff Member at the IBM T.J. Watson Research Center in Yorktown Heights (New York). When he was 33 years old, he won a National competition for Full Professor and went back to Italy. Before joining LUISS University, he was professor of computer science at University of Salerno, University “Ca’ Foscari” of Venice, and University of Rome “Tor Vergata”. He has worked on a wide variety of problems, producing highly-cited results on a number of different areas. Most of his research is centered around the design, analysis and engineering of algorithms for big data sets, with applications to several areas, including graph theory, social network analysis, computer and network security, and computational biology.
His background, which also includes a five-year experience in industrial research, puts him in a unique position to carry out work that combines basic research with strong focus on applications.
LUISS University, Italy
2-Connectivity in Directed Graphs
We survey some recent theoretical and experimental results on 2-edge and 2-vertex connectivity in directed graphs. Despite being complete analogs of the corresponding notions on undirected graphs, in digraphs 2-connectivity has a much richer and more complicated structure. For undirected graphs it has been known for over 40 years how to compute all bridges, articulation points, 2-edge- and 2-vertex-connected components in linear time, by simply using depth first search. In the case of digraphs, however, the very same problems have been much more challenging and have been tackled only recently.
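The 40-year-old undirected case referenced above is short enough to show in full: one depth-first search with Tarjan-style low-links finds all bridges in linear time (the directed 2-connectivity problems from the talk need substantially more machinery):

```python
def bridges(n, edges):
    """All bridges of an undirected graph, by one DFS with low-links."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    disc = [-1] * n                  # DFS discovery times
    low = [0] * n                    # lowest discovery time reachable from subtree
    out, timer = [], [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, idx in adj[u]:
            if idx == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, idx)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:     # nothing below (u, v) climbs over it
                    out.append(edges[idx])
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return out

# Triangle 0-1-2 with a pendant edge 2-3: only (2, 3) is a bridge.
found = bridges(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
```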
Riko Jacob is an Associate Professor for Algorithm Engineering in the Theoretical Computer Science Section of IT University of Copenhagen, and a Member of the Algorithms Group.
In 1997 Riko received his Diplom (roughly an MSc) in Computer Science in Würzburg (Germany) and in 2002 his PhD from Aarhus University (Denmark). Riko has worked as a researcher at Los Alamos National Lab (USA), LMU Munich (Germany), TU Munich (Germany), and ETH Zurich (Switzerland).
IT University of Copenhagen, Denmark
Models for Algorithms on Big Data
Models of computation play a fundamental role in the way we think and argue about algorithms, complexities and lower bounds. In this talk I will present some established models of parallel computation that take local memory into account, which makes them particularly relevant for working with big data.
Madhav Marathe is a Division Director of the Computing, Simulations and Network Science Division at the Biocomplexity Institute and a Professor of Computer Science. From January 2005 to September 2018, he was a Professor of Computer Science at Virginia Tech. Concurrently, at Virginia Tech, he was the Deputy Director (2005--2014) and then the Director (2014--2018) of the Network Dynamics and Simulation Science Laboratory at the Biocomplexity Institute of Virginia Tech. He obtained his Bachelor of Technology degree in 1989 in Computer Science and Engineering from the Indian Institute of Technology, Madras, and his Ph.D. in 1994 in Computer Science from the University at Albany (SUNY), under the supervision of Professors Harry B. Hunt III and Richard E. Stearns. Before coming to Virginia Tech in 2005, he worked in the Basic and Applied Simulation Science group (CCS-5) in the Computer and Computational Sciences division at Los Alamos National Laboratory, where he was team leader in a theory-based, advanced simulation program to represent, design, and analyze extremely large socio-technical and critical infrastructure systems.
He holds adjunct appointments at the Chalmers University and the Indian Institute of Public Health.
He has published more than 300 research articles in peer-reviewed journals, conference proceedings, and books, and has over 20 years of experience in project leadership and technology development. He was the 2011 inaugural George Michael Distinguished Scholar at the Lawrence Livermore National Laboratory. In 2014 he became an Association for Computing Machinery (ACM) Fellow for contributions to high performance computing algorithms and software environments for simulating and analyzing socio-technical network science. In 2013, he was named an Institute of Electrical and Electronics Engineers (IEEE) Fellow for contributions to socio-technical network science. In 2014, he was named an American Association for the Advancement of Science (AAAS) Fellow for contributions to high performance computing algorithms and software environments. In 2018 he was named a Society for Industrial and Applied Mathematics (SIAM) Fellow for contributions to high performance computing algorithms and software systems for network science and public health epidemiology. That same year he was awarded the Virginia Tech College of Engineering Dean's Award for Excellence in Research.
University of Virginia, USA
Massively interacting socio-technical networks: HPC-enabled pervasive, personalized and precision analytics
Reasoning about real-world social habitats, often represented as multiplexed co-evolving networks, is complicated and scientifically challenging due to their size, co-evolutionary nature, and the need to represent multiple dynamical processes simultaneously. The 2014 Ebola epidemic, the 2009 financial crisis, global migration, the societal impacts of natural and human-initiated disasters, and the effects of climate change provide examples of the many challenges faced when developing such environments. Advances in computing have fundamentally altered how such networks can be synthesized, analyzed and reasoned about.
The talk will focus on foundations and advanced technologies to study co-evolving socio-technical networks with the aim of developing scalable and practical decision support systems. We will draw on our work in urban transport planning, national security and public health epidemiology to guide the discussion. We will conclude with directions for future research and open problems.
Andrew McGregor is an Associate Professor at the University of Massachusetts, Amherst. He received a B.A. degree and the Certificate of Advance Study in Mathematics from the University of Cambridge and a Ph.D. from the University of Pennsylvania.
He also spent a couple of years as a post-doc at UC San Diego and Microsoft Research SVC. He is interested in many areas of theoretical computer science and specializes in data stream algorithms, linear sketching, and communication complexity. He received the NSF Career Award in 2010.
UMass, Amherst, USA
Analyzing Massive Graphs via Streams and Sketches
Over the last decade, there has been considerable interest in designing algorithms for processing massive graphs in the data stream model. The original motivation was two-fold: a) in many applications, the dynamic graphs that arise are too large to be stored in the main memory of a single machine and b) considering graph problems yields new insights into the complexity of stream computation. However, the techniques developed in this area are now finding applications in other areas including data structures for dynamic graphs, approximation algorithms, and distributed and parallel computation. We survey the state-of-the-art results; identify general techniques; and highlight some simple algorithms that illustrate basic ideas.
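Perhaps the simplest example of the genre (an insert-only sketch; handling edge deletions requires the linear-sketching machinery surveyed in the talk): a one-pass algorithm maintains a spanning forest with union-find in O(n) space, discarding all other edges, and can then answer connectivity queries:

```python
class StreamForest:
    """One-pass connectivity over an insert-only edge stream, O(n) space."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.forest = []                 # the at-most n-1 edges we keep

    def _find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def process(self, u, v):
        ru, rv = self._find(u), self._find(v)
        if ru != rv:                     # edge merges two components: keep it
            self.parent[ru] = rv
            self.forest.append((u, v))
        # otherwise the edge is redundant for connectivity and is discarded

    def connected(self, u, v):
        return self._find(u) == self._find(v)

sf = StreamForest(6)
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (2, 0), (4, 5)]:
    sf.process(u, v)
```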
Chellapilla Patvardhan is a Professor in the Electrical Engineering Department with more than 22 years of experience in teaching and research. He is currently the Coordinator of the M.Tech Programme in the Faculty of Engineering. He teaches Data Structures and C Programming, Design and Analysis of Algorithms, Image Processing, Computer Architecture, Computer Networks, Operating Systems, Quantum Computing, etc. His research interests include quantum and quantum-inspired algorithms, image processing, intelligent information processing, and evolutionary algorithms.
He has guided 5 PhDs and several PG Dissertations. He is an author / co-author of more than 250 research papers, one book and a co-editor of two Conference Proceedings. He has completed several R&D projects. He has been a training Consultant to Cadence Design Systems (2003 – 08) and Atrenta Communications.
He is also currently providing Consultancy to Cadence Design Systems on Advanced Algorithms. He has an ongoing collaboration with Kiel University, Germany and was invited for collaborative research work to Kiel in 2001, 2008, 2009 and 2010. He was invited to participate in the Indo-German Round Table on Discrete Algorithms in Bonn in June, 2010.
He also visited INRIA, Sophia Antipolis, France and gave two talks in June, 2010. Professor Patvardhan has won more than 20 Best Presentation / Best Paper awards.
DEI, Agra, INDIA
Heuristics and meta-heuristics for the Bounded Diameter Minimum Spanning Tree (BDMST) Problem
The Bounded Diameter Minimum Spanning Tree (BDMST) Problem is NP-hard and hard to approximate. Exact algorithms in the literature solve problems with no more than about 60 vertices. State-of-the-art heuristics and meta-heuristics in the literature are able to solve larger instances; results are reported in the literature for Euclidean and random weight graph instances having up to 1000 vertices and 499,500 edges. Extant heuristics for the problem require at least O(|V|³) time in order to return a low cost bounded diameter spanning tree.
We present several novel heuristics and meta-heuristics for the problem. A heuristic named the Center-Based Least Sum-of-Costs (CBLSoC) heuristic is given for the BDMST problem. This heuristic takes O(|V|³) time and obtains superior results in comparison to the extant heuristics on Euclidean instances when the diameter constraint is not very small. A simpler, faster version of this heuristic, called CBLSoC-lite, is proposed, which runs in O(|V|²) time and obtains results competitive with those of the best known extant heuristics. This heuristic can be applied for solving much larger problem sizes than attempted in the literature.
Two novel heuristics that obtain low cost trees by quickly searching for and constructing an effective backbone of spanning tree nodes are also presented. These heuristics, called the QCH-Greedy and QCH-LSoC heuristics, obtain extremely good performance on Euclidean graphs. Further, both heuristics take O(|V|²√|V|) time and scale better with large problem sizes than the extant heuristics. A parallel version of the CBLSoC heuristic, called CBLSoC-P, is presented along with parallel versions of two extant algorithms.
Several computational experiments are performed, and the proposed and extant heuristics are compared on a wide range of Euclidean and random weight benchmark problems of different sizes, densities and diameter constraints.
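To illustrate the general center-based idea behind this family of heuristics (a generic depth-capped Prim-style construction for even diameter bounds, written for illustration only; it is not the CBLSoC heuristic itself): grow a tree greedily from a center while keeping every vertex within D/2 hops of it, which guarantees hop-diameter at most D:

```python
import heapq

def center_tree(weights, center, D):
    """weights[u][v]: edge cost of a complete graph. Greedily grows a tree
    from `center`, capping every vertex's hop-distance from the center at
    D // 2 (D assumed even), so the tree's hop-diameter is at most D."""
    n = len(weights)
    max_depth = D // 2
    depth = {center: 0}
    tree = []
    heap = [(weights[center][v], center, v) for v in range(n) if v != center]
    heapq.heapify(heap)
    while heap and len(depth) < n:
        w, u, v = heapq.heappop(heap)
        if v in depth or depth[u] + 1 > max_depth:
            continue
        depth[v] = depth[u] + 1
        tree.append((u, v, w))
        if depth[v] < max_depth:                 # v may still take children
            for x in range(n):
                if x not in depth:
                    heapq.heappush(heap, (weights[v][x], v, x))
    return tree

# A made-up 4-vertex cost matrix.
W = [[0, 1, 4, 3],
     [1, 0, 2, 5],
     [4, 2, 0, 1],
     [3, 5, 1, 0]]
star = center_tree(W, 0, 2)      # depth cap 1: a star around vertex 0
deeper = center_tree(W, 0, 4)    # depth cap 2: cheaper, still diameter <= 4
```

Tightening the diameter bound forces the star and its cost of 8; relaxing it to 4 lets the construction route through cheap intermediate vertices for a total of 6, which is the cost-versus-diameter trade-off the heuristics above navigate.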
Kavitha Telikepalli is a member of the faculty of the School of Technology & Computer Science at
Tata Institute of Fundamental Research, Mumbai.
Kavitha's primary research interests are in graph algorithms and combinatorial optimization.
Kavitha has held the following academic positions:
Associate Professor, TIFR Mumbai [July 2011 onwards],
Reader, TIFR Mumbai [Feb 2010 -- June 2011],
Assistant Professor, IISc Bangalore [Jan 2005 -- Jan 2010],
Post-doc, MPI für Informatik, Saarbrücken [Sep 2002 -- Dec 2004]
She obtained her Ph.D. in Computer Science from TIFR Mumbai (1996--2002) and
her B.Tech. in Computer Science & Engineering from IIT Madras (1991--1995).
TIFR, Mumbai, India
Popular Matchings: A Tale of Two Classes
We consider popular matching problems in a stable marriage instance G.
A matching M is popular if it does not lose a head-to-head election
against any matching, where each vertex casts a vote for the matching
where it gets a better assignment. Popular matchings always exist in G
since every stable matching is popular. Algorithmic questions for
several popular matching problems have been studied in the last few
years: these include finding a max-size popular matching, finding a
popular matching with our favorite edge(s) in it, finding a max-weight
popular (mixed) matching when there are edge weights. We will see
efficient algorithms for some of these problems.
Interestingly, all tractability results in popular matchings seem to
reduce the popular matching problem to an appropriate question in
either stable matchings or a new subclass of matchings that we will
call "dominant matchings". Dominant matchings always exist in G and
form a subclass of max-size popular matchings while stable matchings
form a subclass of min-size popular matchings. We recently showed that
deciding if G admits a popular matching that is neither stable nor
dominant is NP-hard. Thus this problem exhibits an anomaly that the
min-size and max-size versions are easy, however to decide if there is
anything in between is NP-hard.
[Joint work with Agnes Cseh, Yuri Faenza, Chien-Chung Huang, Vladlena
Powers, and Xingyu Zhang.]
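Since every stable matching is popular, the easiest way to exhibit a popular matching is the classic Gale-Shapley proposal algorithm (a textbook sketch; the max-size and dominant-matching algorithms from the talk modify this proposal process in ways not shown here):

```python
def gale_shapley(men_prefs, women_prefs):
    """Preference lists as dicts: name -> list of names, most preferred
    first. Returns a stable matching as a dict woman -> man."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)           # unmatched men still proposing
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                     # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]   # best woman not yet proposed to
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free.append(m)           # w rejects m
    return engaged

# A tiny made-up instance: both men prefer x, and x prefers b.
match = gale_shapley(
    {"a": ["x", "y"], "b": ["x", "y"]},
    {"x": ["b", "a"], "y": ["a", "b"]})
```

The output here is the unique stable matching of the instance, hence a min-size popular matching; dominant matchings sit at the other end of the size spectrum described in the abstract.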
Students Presentations (Feb 18, 2019)
- Angriman, Eugenio (HU Berlin): Research Overview
- Bingmann, Timo (Karlsruhe): Thrill: Pipelined External Memory Processing in
the Cloud with C++
- Gottesbüren, Lars (Karlsruhe): Hypergraph Partitioning
- Hespe, Demian (Karlsruhe): Practical Kernelization Techniques for the
Maximum Cut Problem
- Hübschle-Schneider, Lorenz (Karlsruhe): Communication Efficient Algorithms
- Klaus, Julien (FSU, Jena): Visualization Support for Developing a Matrix
- Penschuck, Manuel (GU Frankfurt): Generating Hyperbolic Random Graphs
- Tran, Khai van (TU Berlin): Quickest Transshipments, Black Boxes, and
- Wedemeyer, Axel (CAU, Uni Kiel): Algorithms for Genome Assembly
- Schnelle, Niklas; Florian Kramer; Johannes Kalmbach (Uni Freiburg):
Live Demo of QLever: A Query Engine for Efficient SPARQL+Text Search
Students Presentations (Feb 19, 2019)
- Manas Jyoti Kashyop (CSE, IIT Madras): Fully Dynamic Maximal Matching without 3 length augmenting paths in O(√n) expected amortized update time
- Apoorv Garg (CSE, IIT Bombay) : Scheduling Trains on a Unidirectional Line Complexity and Algorithms
- K Sandesh Kamath (Chennai Mathematical Institute): Neural Networks and Adversarial Robustness
- Tirtharaj Dash (BITS Pilani, Goa Campus): Learning in Presence of Expert Knowledge (A talk on Inductive Logic Programming)
- Vishwambhar Pathak (Asst. Prof., Dept of CSE BIT MESRA Jaipur Campus): Scalable Machine Perception - A Case of Big Data
- Sparsa Roychowdhury (CSE, IIT Bombay): Analyzing Reachability of Timed Systems
- Ankita Tiwari (Dept. of EEE, IIT Guwahati): Big Data Analytics and its Impact in E-Commerce
- Diksha Chaudhary (Dept of Computational & Data Sciences Center for Brain Research, Indian Institute of Science, Bangalore): Big Data & Genomics
- Amit Kumar Singh (Dept. of EEE, IIT Guwahati): Design of Artificial Intelligence enabled Parallel and Scalable Very Efficient Device Analyzer (VEDA)