The purpose of the Statistical Relational AI (StarAI)
workshop is to bring together researchers and practitioners from
three fields: logical (or relational)
AI/learning, probabilistic (or statistical)
AI/learning and neural approaches for AI/learning with knowledge graphs and other structured data. These fields share many key features and often address similar problems and tasks. Until recently, however, research in them has progressed largely independently, with little or no interaction. The fields often use different terminology for the same concepts; as a result, keeping up with and understanding results in the other fields is cumbersome, which slows down research.
Our long-term goal is to change this by achieving a synergy
between logical, statistical and neural AI.
As a stepping stone towards realizing this big-picture view of AI, we are organizing the
Ninth International Workshop on Statistical Relational AI
at the
34th AAAI Conference on Artificial Intelligence (AAAI)
in New York, on February 7th 2020.
Format
StarAI will be a one day workshop with short paper presentations, a poster session, and three invited speakers.
- Guy Van den Broeck (UCLA)
- Lise Getoor (UC Santa Cruz)
- Yejin Choi (University of Washington & Allen Institute for AI)
Submissions
Authors should submit either a full paper reporting novel technical contributions or work in progress (AAAI style, up to 7 pages excluding references), a short position paper (AAAI style, up to 2 pages excluding references), or an already published work (verbatim, no page limit, citing the original work), in PDF format via EasyChair. All submitted papers will be carefully peer-reviewed by multiple reviewers, and low-quality or off-topic papers will be rejected. Accepted papers will be presented as a short talk and a poster.
Important Dates
- Paper Submission: November 18, 2019 (extended from November 15)
- Notification of Acceptance: December 6, 2019
- Camera-Ready Papers: January 15, 2020
- Date of Workshop: February 7, 2020
Schedule
Morning
- 8:30 a.m.: Welcome and introduction
- 8:35 a.m.: Invited talk: Guy Van den Broeck
- Title: Querying advanced probabilistic models: from relational embeddings to probabilistic programs
Abstract: This talk will go over two recent developments in modeling and querying highly complex probability distributions. First, I will talk about running symbolic queries on relational embedding models, which live in vector space, as if they were probabilistic databases. Surprisingly, this can be done in a way that is sound, exact, accurate, and tractable. Our approach, called TractOR, overcomes some key theoretical limitations of more traditional probabilistic databases, while vastly extending the applicability of relational embedding models to go far beyond link prediction. Second, I will talk about an even richer class of distributions: discrete probabilistic programs. We introduce a novel probabilistic programming language, called Dice. It combines key insights from the AI and programming languages community to open up the black box of probabilistic program inference. This approach results in a highly efficient inference strategy that scales to probabilistic programs with tens of thousands of lines of code.
- 9:35 a.m.: Poster Spotlights (90 seconds each)
- 10:10 a.m.: Coffee break & Poster Session (#1-21)
- 11:30 a.m.: Invited talk: Lise Getoor
- Title: Probabilistic Soft Logic: A Scalable, Declarative Approach to Structured Prediction from Noisy Data
Abstract: A fundamental challenge in developing impactful artificial intelligence technologies is balancing the ability to model rich, structured domains with the ability to scale to big data. Many important problem areas are both richly structured and large scale, including computational social science problems, knowledge graphs and more. In this talk, I will describe Probabilistic Soft Logic (PSL), a declarative probabilistic programming language that is able to both capture rich structure and scale to big data. The mathematical framework upon which PSL is based, hinge-loss Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model that generalizes three different approaches to inference. I show that all three views lead to the same inference objective, and that this inference objective is convex, leading to highly efficient inference algorithms. These inference algorithms can scale further via the use of smart grounding strategies, lifting and more. Along the way, I’ll describe results in a variety of different domains.
- 12:30 p.m.: Lunch break (on your own)
Afternoon
- 2:00 p.m.: Invited talk: Yejin Choi
- Title: Neuro-Symbolic Commonsense Intelligence: Cracking the Longstanding Challenge in AI
Abstract: Despite considerable advances in deep learning, AI remains narrow and brittle. One fundamental limitation comes from its lack of commonsense intelligence: reasoning about everyday situations and events, which, in turn, requires knowledge about how the physical and social world works.
In this talk, I will share some of our recent efforts that attempt to crack commonsense intelligence by neural representation learning of commonsense knowledge and reasoning. First, I will introduce ATOMIC, the atlas of relational commonsense knowledge, organized as a graph of 877k if-then rules (e.g., "if X pays Y a compliment, then Y will likely return the compliment"). Next, I will introduce COMET, our neural commonsense models that can learn from and generalize well beyond the knowledge provided in the symbolic ATOMIC knowledge graph to successfully reason about previously unseen events. Finally, I will present a suite of commonsense AI benchmarks ranging from abductive and counterfactual reasoning to visual commonsense reasoning. I will conclude the talk by discussing major open research questions, including the importance of algorithmic solutions to reduce incidental biases in data that can lead to overestimation of true AI capabilities.
- 3:00 p.m.: Poster Spotlights (90 seconds each)
- 3:40 p.m.: Coffee break & Poster Session (#22-46)
- 5:00 p.m.: Panel and wrapup
Accepted Papers
- Vaishak Belle. Abstracting Probabilistic Models: A Logical Perspective
- Anton Fuxjaeger and Vaishak Belle. Scaling up Probabilistic Inference in Linear and Non-Linear Hybrid Domains by Leveraging Knowledge Compilation
- Somak Aditya and Atanu Sinha. Uncovering Relations for Marketing Knowledge Representation
- Tao Li, Vivek Gupta, Maitrey Mehta and Vivek Srikumar. A Logic-Driven Framework for Consistency of Neural Models
- Efthymia Tsamoura, Victor Gutierrez-Basulto and Angelika Kimmig. Beyond the Grounding Bottleneck: Datalog Techniques for Inference in Probabilistic Logic Programs
- Michael Varley and Vaishak Belle. Implementing Fairness with Tractable Probabilistic Models
- Steven Holtzen, Todd Millstein and Guy Van den Broeck. Generating and Sampling Orbits for Lifted Probabilistic Inference
- Ying Jin, Weilin Fu, Jian Kang, Jiadong Guo and Jian Guo. Bayesian Symbolic Regression
- Jonathan Brophy and Daniel Lowd. EGGS: A Flexible Approach to Relational Modeling of Social Network Spam
- Marcel Gehrke, Ralf Möller and Tanya Braun. Taming Reasoning in Temporal Probabilistic Relational Models
- Ondřej Kuželka and Yuyi Wang. Domain-Liftability of Relational Marginal Polytopes
- Yuqiao Chen, Yibo Yang, Sriraam Natarajan and Nicholas Ruozzi. Lifted Hybrid Variational Inference
- Sima Behpour. Active Learning in Video Tracking
- Varun Embar, Sriram Srinivasan and Lise Getoor. Estimating Aggregate Properties In Relational Networks With Unobserved Data
- Bahare Fatemi, Perouz Taslakian, David Vazquez and David Poole. Knowledge Hypergraphs: Prediction Beyond Binary Relations
- Alexander Hayes. srlearn: A Python Library for Gradient-Boosted Statistical Relational Models
- Pegah Jandaghi and Jay Pujara. Human-like Time Series Summaries via Trend Utility Estimation
- Junkang Li, Solene Thepaut and Veronique Ventos. Reducing incompleteness in the game of Bridge using PLP
- Gaurav Sinha, Ayush Chauhan, Aurghya Maiti, Naman Poddar and Pulkit Goel. Disentangling Mixture of Interventions on a Causal Bayesian Network Using Aggregate Observation
- Bishwamittra Ghosh and Kuldeep S. Meel. IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules
- Bassem Makni, Ibrahim Abdelaziz and Jim Hendler. Explainable Deep RDFS Reasoner
- Shubham Sharma, Subhajit Roy, Kuldeep S. Meel and Mate Soos. GANAK: A Scalable Probabilistic Exact Model Counter
- Yaqi Xie, Ziwei Xu, Mohan S. Kankanhalli, Kuldeep S. Meel and Harold Soh. Embedding Symbolic Knowledge into Deep Networks
- Vaishak Belle and Luc De Raedt. Semiring Programming: A Declarative Framework for Generalized Sum Product Problems
- Vaishak Belle. SMT + ILP
- Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel and Prateek Saxena. Quantitative Verification of Neural Networks and Its Security Applications
- Yuhao Wang, Vlado Menkovski, Hao Wang, Xin Du and Mykola Pechenizkiy. Causal Discovery from Incomplete Data: A Deep Learning Approach
- Pedro Zuidberg Dos Martires and Samuel Kolb. Monte Carlo Anti-Differentiation for Approximate Weighted Model Integration
- Serena Booth, Ankit Shah, Yilun Zhou and Julie Shah. Sampling Prediction-Matching Examples in Neural Networks: A Probabilistic Programming Approach
- David Arbour, Ryan Rossi, Somdeb Sarkhel and Nesreen Ahmed. Causal Inference for Graph-based Relational Time Series
- Michael Skinner, Lakshmi Raman, Neel Shah, Abdelaziz Farhat and Sriraam Natarajan. A Preliminary Approach for Learning Relational Policies for the Management of Critically Ill Children
- Johan Pauwels, Gyorgy Fazekas and Mark B. Sandler. A Critical Look at the Applicability of Markov Logic Networks for Music Signal Analysis
- Tanya Braun and Ralf Möller. Exploring Unknown Universes in Probabilistic Relational Models
- Devendra Dhami, Siwen Yan, Gautam Kunapuli and Sriraam Natarajan. Non-Parametric Learning of Gaifman Models
- Adrian Phoulady, Ole-Christoffer Granmo, Saeed R. Gorji and Hady Ahmady Phoulady. The Weighted Tsetlin Machine: Compressed Representations with Weighted Clauses
- Navdeep Kaur, Gautam Kunapuli and Sriraam Natarajan. Non-Parametric Learning of Lifted Restricted Boltzmann Machines
- Mayukh Das, Nandini Ramanan, Janardhan Rao Doppa and Sriraam Natarajan. One-Shot Induction of Generalized Logical Concepts via Human Guidance
- Matthew Wai Heng Chung and Hegler Tissot. Evaluating the Effectiveness of Margin Parameter when Learning Knowledge Embedding Representation for Domain-specific Multi-relational Categorized Data
- Nimar Arora, Nazanin Tehrani, Kinjal Shah, Lily Li, Narges Torabi, Michael Tingley, David Noursi, Sepehr Masouleh, Eric Lippert and Erik Meijer. Newtonian Monte Carlo: single-site MCMC meets second-order gradient methods
- Tal Friedman and Guy Van den Broeck. Towards a Tractable Relational Embedding Model
- Saket Dingliwal, Ronak Agarwal, Happy Mittal and Parag Singla. Advances in Symmetry Breaking for SAT Modulo Theories
- Pablo Robles-Granda and Jennifer Neville. On the Challenges of Representing Joint Cumulative Probabilistic Generative Functions of Attributed Networks
- Sebastijan Dumancic, Tias Guns, Wannes Meert and Hendrik Blockeel. Learning Relational Representations with Auto-encoding Logic Programs
- Timothy van Bremen and Ondrej Kuzelka. Approximate Weighted First-Order Model Counting: Exploiting Fast Approximate Model Counters and Symmetry
- Ayush Maheshwari, Hrishikesh Patel, Nandan Rathod, Ritesh Kumar, Ganesh Ramakrishnan and Pushpak Bhattacharyya. Tale of tails: Rule-Augmented Sequence Labeling for Event Extraction in Low Resource Languages
- Stefano Teso. Does Symbolic Knowledge Prevent Adversarial Fooling?
Organization
Organizing Committee
For comments, queries and suggestions, please contact:
- Sebastijan Dumancic (KU Leuven)
- Angelika Kimmig (Cardiff)
- David Poole (UBC)
- Jay Pujara (USC)
Program Committee
- Behrouz Babaki (Polytechnique Montreal)
- Hendrik Blockeel (KU Leuven)
- YooJung Choi (UCLA)
- Jaesik Choi (UNIST)
- Fabio Cozman (University of São Paulo)
- Andrew Cropper (University of Oxford)
- Golnoosh Farnadi (MILA)
- Alberto García-Durán (EPFL)
- Vibhav Gogate (University of Texas at Dallas)
- Mehran Kazemi (Borealis AI)
- Kristian Kersting (TU Darmstadt)
- Ondřej Kuželka (Czech Technical University in Prague)
- Mark Law (Imperial College)
- Robin Manhaeve (KU Leuven)
- Pasquale Minervini (UCL)
- Sriraam Natarajan (The University of Texas at Dallas)
- Aniruddh Nath (Google)
- Scott Sanner (University of Toronto)
- Vítor Santos Costa (University of Porto)
- Oliver Schulte (Simon Fraser University)
- Sameer Singh (University of California, Irvine)
- Lucas Sterckx (Ghent University)
- Stefano Teso (University of Trento)
- Timothy Van Bremen (KU Leuven)
- Guy Van den Broeck (UCLA)
- Vincent Vercruyssen (KU Leuven)
- Antonio Vergari (UCLA)
- Pedro Zuidberg Dos Martires (KU Leuven)
- Rodrigo de Salvo Braz (SRI International)
Topics
StarAI is currently generating a great deal of new research and has significant theoretical and practical implications.
Theoretically, combining logic and probability in a unified representation and building general-purpose reasoning tools for it has been the dream of AI, dating back to the late 1980s.
Practically, successful StarAI tools will enable new applications in several large, complex real-world domains, including those involving big data, social networks, natural language processing, bioinformatics, the web, robotics and computer vision. Such domains are often characterized by rich relational structure and large amounts of uncertainty. Logic helps to effectively handle the former, while probability helps to effectively manage the latter.
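To make the combination of logic and probability concrete, here is a minimal sketch, not tied to any particular StarAI system, of Markov logic-style inference by exhaustive enumeration. The two-person domain, the friendship evidence, the smoking rule and its weight are all illustrative assumptions invented for this example.

```python
import itertools
import math

# Illustrative toy model (all names and numbers are made up for this sketch):
# one weighted first-order rule, Friends(x, y) & Smokes(x) -> Smokes(y).
people = ["anna", "bob"]
friends = {("anna", "bob"), ("bob", "anna")}  # observed Friends/2 facts
w = 1.5  # weight of the rule; higher weight = stronger constraint

def n_satisfied(smokes):
    """Count satisfied groundings of the weighted implication."""
    count = 0
    for x, y in itertools.product(people, repeat=2):
        body = (x, y) in friends and smokes[x]
        if not body or smokes[y]:  # an implication holds unless body true, head false
            count += 1
    return count

# Enumerate all worlds consistent with the evidence Smokes(anna)=True;
# each world gets weight exp(w * number of satisfied groundings).
weights = {}
for vals in itertools.product([False, True], repeat=len(people)):
    smokes = dict(zip(people, vals))
    if not smokes["anna"]:
        continue  # inconsistent with evidence
    weights[vals] = math.exp(w * n_satisfied(smokes))

z = sum(weights.values())  # normalization constant over the evidence-consistent worlds
p_bob = sum(wt for vals, wt in weights.items()
            if dict(zip(people, vals))["bob"]) / z
print(f"P(Smokes(bob) | evidence) = {p_bob:.3f}")  # prints 0.818
```

Brute-force enumeration is exponential in the number of ground atoms; the point of much StarAI research (lifted inference, knowledge compilation, grounding techniques) is precisely to avoid this blow-up.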
We invite researchers from all subfields of AI to attend the workshop and to explore together how to reach the goals imagined by the early AI pioneers.
The focus of the workshop will be on general-purpose representation, reasoning and learning tools for StarAI as well as practical applications. Specifically, the workshop will encourage active participation from researchers in the following communities: satisfiability (SAT), knowledge representation (KR), constraint satisfaction and programming (CP), (inductive) logic programming (LP and ILP), graphical models and probabilistic reasoning (UAI), statistical learning (NeurIPS, ICML, and AISTATS), graph mining (KDD and ECML PKDD) and probabilistic databases (VLDB and SIGMOD).
It will also actively involve researchers from more applied communities, such as natural language processing (ACL and EMNLP), information retrieval (SIGIR, WWW and WSDM), vision (CVPR and ICCV), semantic web (ISWC and ESWC) and robotics (RSS and ICRA).
Links
Previous Workshops
Previous StarAI workshops were held in conjunction with
AAAI 2010, UAI 2012, AAAI 2013, AAAI 2014, UAI 2015, IJCAI 2016,
UAI 2017 and IJCAI 2018 and were among the most popular workshops at the conferences.