Supported by: TTIJ, TTIC, AIP, AIRC, and Osaka University
Additional cooperation from: ISM and Tokyo Tech

Sixth International Workshop on Symbolic-Neural Learning (SNL2022)

July 8-9, 2022
Venue: Toyota Technological Institute, Nagoya, Japan

Workshop Overview

Day 1: July 8, 2022 (afternoon sessions only, starts at 13:00)

Day 2: July 9, 2022 (morning and afternoon sessions)

Program

Notice: The order of speakers on July 8th has been changed due to unavoidable circumstances.

July 8 (Friday)

13:00-13:10 Opening
13:10-14:10 Keynote talk I: Katsushi Ikeuchi (Microsoft)
14:10-15:10 Keynote talk II: *Dan Roth (University of Pennsylvania/AWS AI Labs)
15:10-15:30 Coffee break
15:30-15:55 Invited talk I: Saeed Seddighin (Toyota Technological Institute at Chicago)
15:55-16:20 Invited talk II: *Hitomi Yanaka (University of Tokyo)
16:20-16:45 Invited talk III: Shin-ichi Maeda (Preferred Networks)
16:45-17:10 Invited talk IV: Mayu Otani (CyberAgent)
17:10-17:35 Invited talk V: Ryo Yonetani (Omron SINIC X Corporation)

July 9 (Saturday)

10:00-11:00 Keynote talk III: Sebastian Riedel (Facebook AI Research/UCL)
11:00-11:25 Invited talk VI: Komei Sugiura (Keio University)
11:25-11:50 Invited talk VII: Brian Bullins (Toyota Technological Institute at Chicago)
11:50-13:20 Lunch break
13:20-14:20 Poster session
14:20-14:30 Break
14:30-15:30 Keynote talk IV: *Eduard Hovy (University of Melbourne/CMU)
15:30-15:50 Coffee break
15:50-16:15 Invited talk VIII: Yusuke Sekikawa (Denso IT Laboratory)
16:15-16:40 Invited talk IX: Yasuhide Miura (FUJIFILM Business Innovation)
16:40-17:05 Invited talk X: Bradly Stadie (Toyota Technological Institute at Chicago)
17:05-17:30 Invited talk XI: Makoto Miwa (Toyota Technological Institute)
17:30-17:40 Closing

(*online)

Poster presentations

(P01) Chihiro Nakatani, Hiroaki Kawashima, Norimichi Ukita, Configuration- and Action-aware Joint Attention Estimation

(P02) Takuma Yoneda, Ge Yang, Matthew R. Walter, Bradly Stadie, Invariance Through Latent Alignment

(P03) Yuki Kondo, Norimichi Ukita, Joint Learning of Blind Super-Resolution and Crack Segmentation for Degraded Images

(P04) Bradly C. Stadie, Lunjun Zhang, Ge Yang, World Model as a Graph: Learning Latent Landmarks for Planning

(P05) Shanshan Liu, Yuji Matsumoto, A simple method for End-to-End Relation Extraction

(P06) Takahiro Maeda, Norimichi Ukita, MotionAug: Augmentation with Physical Correction for Human Motion Prediction

(P07) Takeru Oba, Norimichi Ukita, Future-guided imitation learning for improving recurrent training

(P08) Kohei Makino, Makoto Miwa, Yutaka Sasaki, A sequential edge editor that considers relationships between relations for document-level relation extraction

(P09) Machel Reid, Edison Marrese Taylor, Yutaka Matsuo, Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers

(P10) Cristian Rodriguez-Opazo, Edison Marrese-Taylor, Basura Fernando, Hiroya Takamura, Qi Wu, Stochastic Bucket-wise Feature Sampling For Memory Efficient Moment Localization in Long Videos

(P11) Siti Oryza Khairunnisa, Zhousi Chen, Mamoru Komachi, A Study on Cross-Lingual Transfer for Named Entity Recognition in the Indonesian Language

(P12) Ryuki Ida, Makoto Miwa, Yutaka Sasaki, Text Classification using a Document Graph with Nodes Initialized with Textual Information

(P13) Zhousi Chen, Mamoru Komachi, Discontinuous Constituency Parsing and Beyond

(P14) Nallappan Gunasekaran, Masaki Asada, Makoto Miwa, Heterogeneous Graph Representation Learning for Predicting Drug-Drug Interactions

(P15) Takashi Wada, Timothy Baldwin, Yuji Matsumoto, Jey Han Lau, Extracting Multi-Sense Word Embeddings from Pre-Trained Language Models For Unsupervised Lexical Substitution

(P16) Mohammad Golam Sohrab, Matiss Rikters, Makoto Miwa, Pre-trained Sequence-to-Sequence models with BERT Non-Autoregressive Autoencoder

(P17) Masaki Asada, Makoto Miwa, Yutaka Sasaki, Recent developments on neural Drug-Drug Interaction extraction from the literature

(P18) Koji Watanabe, Katsumi Inoue, Learning State Transition Rules from Hidden Layers of Restricted Boltzmann Machines

(P19) Kazutoshi Akita, Norimichi Ukita, Context-aware Region-dependent Scale Proposals for Object Detection using Super-Resolution

(P20) Tomoki Tsujimura, Makoto Miwa, Yutaka Sasaki, Concept-Level Relation Extraction over Linked Entities