COLIEE-2021 CALL FOR TASK PARTICIPATION
Competition on Legal Information Extraction/Entailment (COLIEE)
COLIEE-2021 Workshop: June 21st, 2021 at 9AM GMT
COLIEE-2022 registration is open!
Run in association with the International Conference on Artificial Intelligence and Law (ICAIL 2021)
ICAIL and COLIEE will be held online in 2021 due to the COVID-19 pandemic
ICAIL registration is free of charge
COLIEE registration due: Mar 25, 2021
Please register for ICAIL and COLIEE by June 18 using conftool in order to join the online workshop on June 21. Registration is free of charge. Those who wish to use previous COLIEE data for a trial should contact rabelo(at)ualberta.ca. The schedule of all COLIEE 2021 talks is now available!
COLIEE 2021 Proceedings now available!
Videos from all COLIEE 2021 talks available!
Sponsored by: Alberta Machine Intelligence Institute (AMII), University of Alberta, National Institute of Informatics (NII), vLex Canada, Ross Intelligence, Intellicon
Download Call for Participation

Five tasks are included in the 2021 competition: Tasks 1 and 2 belong to the case law competition, and Tasks 3, 4 and 5 belong to the statute law competition. Task 1 is a legal case retrieval task: it involves reading a new case Q and extracting supporting cases S1, S2, ..., Sn from the provided case law corpus that support the decision for Q. Task 2 is the legal case entailment task, which involves the identification of a paragraph from existing cases that entails the decision of a new case. As in previous COLIEE competitions, Task 3 is to consider a yes/no legal question Q and retrieve relevant statutes from a database of Japanese Civil Code statutes; Task 4 is to confirm entailment of a yes/no answer from the retrieved Civil Code statutes; Task 5 is to confirm entailment of a yes/no answer without any retrieved Civil Code statutes.

1. Tasks Description

1.1 (COLIEE Case Law Competition) Task 1: The Legal Case Retrieval Task

The case law competition focuses on two aspects of legal information processing related to a database of predominantly Federal Court of Canada case law, provided by Compass Law. The legal case retrieval task involves reading a new case Q and extracting supporting cases S1, S2, ..., Sn for the decision of Q from the entire case law corpus. Throughout this document, we call the cases supporting the decision of a new case 'noticed cases'.

1.2 (COLIEE Case Law Competition) Task 2: The Legal Case Entailment Task

This task involves the identification of a paragraph from existing cases that entails the decision of a new case. Given a decision Q of a new case and a relevant case R, a specific paragraph of R that entails the decision Q needs to be identified. By inspecting some examples, we confirmed that the answer paragraph cannot be identified merely by information retrieval techniques: because R is a case relevant to Q, many paragraphs in R can be relevant to Q regardless of entailment. Since the task requires identifying a paragraph which entails the decision of Q, a specific entailment method that compares the meaning of each paragraph in R with Q is required.

1.3 (COLIEE Statute Law Competition) Task 3: The Statute Law Retrieval Task

The COLIEE statute law competition focuses on two aspects of legal information processing related to answering yes/no questions from Japanese legal bar exams (the relevant data sets have been translated from Japanese to English). Task 3, the statute law retrieval task, involves reading a legal bar exam question Q and extracting a subset of Japanese Civil Code Articles S1, S2, ..., Sn from the entire Civil Code which are appropriate for answering the question, such that Entails(S1, S2, ..., Sn, Q) or Entails(S1, S2, ..., Sn, not Q). Given a question Q and the entire set of Civil Code Articles, the set "S1, S2, ..., Sn" has to be retrieved as the answer of this track.

1.4 (COLIEE Statute Law Competition) Task 4: The Legal Textual Entailment Task

Task 4, the legal textual entailment task, involves the identification of an entailment relationship such that Entails(S1, S2, ..., Sn, Q) or Entails(S1, S2, ..., Sn, not Q). Given a question Q, the relevant articles S1, S2, ..., Sn are retrieved in phase one, and it then has to be determined whether the relevant articles entail "Q" or "not Q". The answer of this track is binary: "YES" ("Q") or "NO" ("not Q").
1.5 (COLIEE Statute Law Competition) Task 5: The Legal Question Answering Task

In Task 5, the legal question answering task, given a question Q, it has to be determined whether the relevant articles entail "Q" or "not Q". The answer of this track is binary: "YES" ("Q") or "NO" ("not Q"). This task could be solved as a concatenation of Task 3 and Task 4, but not necessarily so, e.g. by using any knowledge source other than the results of Task 3.

2. Data Corpus

2.1 Case Law Competition Data Corpus (Tasks 1 and 2)

COLIEE-2021 data is drawn from an existing collection of predominantly Federal Court of Canada case law. Participants can choose which of the two sub-tasks they will apply for, as follows: 1) Task 1: legal information retrieval task. Input is an unseen legal case Q, and output should be the relevant cases in the given legal case corpus that support the decision of the input case, i.e., the 'noticed cases'. 2) Task 2: recognizing entailment between a new case and a relevant case. Input is a decision fragment from an unseen case and a relevant case (the full text of the unseen case, with a few pieces suppressed, is also given as input). Output should be a specific paragraph from the relevant case which entails the given fragment of the unseen case.

2.2 Statute Law Competition Data Corpus (Tasks 3, 4 and 5)

The corpus of legal questions is drawn from Japanese legal bar exams, and all the Japanese Civil Law articles are also provided (file format and access described below). Participants can choose which of the three sub-tasks they will apply for, as follows: 1) Task 3: legal information retrieval task. Input is a bar exam 'Yes/No' question and output should be the relevant civil law articles. 2) Task 4: recognizing entailment between law articles and queries. Input is a bar exam 'Yes/No' question. After retrieving relevant articles using your method, you have to determine 'Yes' or 'No' as the output. 3) Task 5: question answering given queries. Input is a bar exam 'Yes/No' question. Using your method, you have to determine 'Yes' or 'No' as the output.

3. Measuring the Competition Results

3.1. Measuring the Case Law Competition Results (Tasks 1 and 2)

For Tasks 1 and 2, the evaluation measures will be precision, recall and F-measure:

Precision = (the number of correctly retrieved cases (paragraphs) for all queries) / (the number of retrieved cases (paragraphs) for all queries)
Recall = (the number of correctly retrieved cases (paragraphs) for all queries) / (the number of relevant cases (paragraphs) for all queries)
F-measure = (2 x Precision x Recall) / (Precision + Recall)

In the evaluation of Task 1 and Task 2, we simply use the micro-average (the evaluation measure is calculated using the results of all queries) rather than the macro-average (the evaluation measure is calculated for each query and then averaged).

3.2. Measuring the Statute Law Competition Results (Tasks 3, 4 and 5)

For Task 3, the evaluation measures will be precision, recall and F2-measure (since the IR process is a pre-process that selects candidate articles to be used in the entailment process, we put emphasis on recall):

Precision = average of (the number of correctly retrieved articles for each query) / (the number of retrieved articles for each query)
Recall = average of (the number of correctly retrieved articles for each query) / (the number of relevant articles for each query)
F2-measure = (5 x Precision x Recall) / (4 x Precision + Recall)

In addition to the above evaluation measures, ordinary information retrieval measures such as Mean Average Precision and R-precision can be used for discussing the characteristics of the submission results.
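The following is a minimal Python sketch of how the measures defined above could be computed. It is an illustration only, not the official evaluation script; the gold and pred dictionaries are hypothetical structures mapping each query id to its set of relevant or retrieved item ids.

# Illustrative sketch only (not the official evaluation script).
# gold and pred map each query id to a set of item ids (cases, paragraphs or articles).

def micro_prf(gold, pred):
    # Tasks 1 and 2: micro-averaged precision, recall and F-measure.
    correct = sum(len(gold[q] & pred.get(q, set())) for q in gold)
    retrieved = sum(len(pred.get(q, set())) for q in gold)
    relevant = sum(len(gold[q]) for q in gold)
    precision = correct / retrieved if retrieved else 0.0
    recall = correct / relevant if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def macro_f2(gold, pred):
    # Task 3: per-query precision and recall are averaged, then combined into F2.
    per_p, per_r = [], []
    for q in gold:
        correct = len(gold[q] & pred.get(q, set()))
        per_p.append(correct / len(pred[q]) if pred.get(q) else 0.0)
        per_r.append(correct / len(gold[q]) if gold[q] else 0.0)
    precision = sum(per_p) / len(per_p)
    recall = sum(per_r) / len(per_r)
    f2 = 5 * precision * recall / (4 * precision + recall) if precision + recall else 0.0
    return precision, recall, f2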
In COLIEE 2021, the final evaluation score over all queries is calculated using the macro-average (the evaluation measure is calculated for each query and the average is used as the final evaluation measure) instead of the micro-average (the evaluation measure is calculated using the results of all queries).

For Task 4, the evaluation measure will be accuracy, with respect to whether the yes/no question was correctly confirmed:

Accuracy = (the number of queries which were correctly confirmed as true or false) / (the number of all queries)

4. Submission details

Participants are required to submit a paper on their method and experimental results. At least one of the authors of an accepted paper has to present the paper at the COLIEE workshop of ICAIL 2021, which will be held online. Papers authored by the competition winners will be included in the main ICAIL 2021 proceedings if the COLIEE organizers confirm the paper's novelty after the review process. Papers should not exceed 10 pages (inclusive of references) in the approved style: the ACM sigconf template (for LaTeX) or the interim template layout.docx (for Word), both available here. Papers must be converted to PDF and submitted through the EasyChair submission webpage.

5. Schedule

31 Jan, 2021 - Training data release
08 Feb, 2021 - Test data release

6. Details of Each Task

6.1 Task 1 Details

Our goal is to explore and evaluate legal document retrieval technologies that are both effective and reliable. The task investigates the performance of systems that search for the set of cases that support an unseen case. The goal of the task is to return the 'noticed cases' in the given collection for a query. We call a case 'noticed' with respect to a query iff the case is referenced by the query case. In this task, the references are redacted from the query case contents, because our goal is to measure how accurately a machine can capture decision-supporting cases for a given case. A corpus composed of Federal Court of Canada case law will be provided. The process of executing the new query cases over the existing cases and generating the experimental runs should be entirely automatic. All query and noticed cases will be provided as a pool. In the training data, we will also disclose which are the noticed cases for each query case. In the test data, only the query cases will be given, and the task is to predict which cases should be noticed with respect to each of the test query cases. There should be no human intervention at any stage, including modifications to your retrieval system motivated by an inspection of the test queries. You won't have access to the test labels before you submit your runs. At most three runs from each group will be assessed. The submission format and evaluation methods are described below.

6.2 Task 2 Details

Our goal is to predict the decision of a new case by entailment from previous relevant cases. As a simpler version of predicting a decision, a decision of a new case and a noticed case will be given as a query. Then, your legal textual entailment system identifies which paragraph in the noticed case entails the decision, by comparing the meanings of the query and the paragraphs. The task investigates the performance of systems that identify a paragraph that entails the decision of an unseen case. Training data consists of triples of a query, a noticed case, and the number of the paragraph of the noticed case by which the decision of the query is entailed. The process of executing the queries over the noticed cases and generating the experimental runs should be entirely automatic.
Test data will include only queries and noticed cases, but no paragraph numbers. There should be no human intervention at any stage, including modifications to your retrieval system motivated by an inspection of the test queries. 'Decision', in this context, does not mean the final decision of a case, but rather a conclusion expressed by the judge which is entailed by one or more particular paragraphs of the noticed case. In our dataset, this information is packaged in a file named 'entailed_fragment.txt'.

6.3 Task 3 Details

Our goal is to explore and evaluate legal document retrieval technologies that are both effective and reliable. The task investigates the performance of systems that search a static set of civil law articles using previously unseen queries. The goal of the task is to return the articles in the collection that are relevant to a query. We call an article "Relevant" to a query iff the query sentence can be answered Yes/No based on an entailment from the meaning of the article. If combining the meanings of more than one article (e.g., "A", "B", and "C") can answer a query sentence, then all the articles ("A", "B", and "C") are considered "Relevant". If a query can be answered by an article "D", and it can also be answered by another article "E" independently, then both articles "D" and "E" are considered "Relevant". This task requires the retrieval of all the articles that are relevant to answering a query. Japanese civil law articles (English translation alongside the Japanese) will be provided, and training data consists of pairs of a query and its relevant articles. The process of executing the queries over the articles and generating the experimental runs should be entirely automatic. Test data will include only queries but no relevant articles. There should be no human intervention at any stage, including modifications to your retrieval system motivated by an inspection of the queries. You should not materially modify your retrieval system between the time you download the queries and the time you submit your runs. At most three runs from each group will be assessed. The submission format and evaluation methods are described below.

6.4 Task 4 Details

Our goal is to construct Yes/No question answering systems for legal queries, by entailment from the relevant articles. Given a 'Yes/No' legal bar exam question, your legal information retrieval system retrieves the relevant Civil Law articles. Then, the task investigates the performance of systems that answer 'Yes' or 'No' to previously unseen queries by comparing the meanings of the queries and the retrieved Civil Law articles. Training data consists of triples of a query, relevant article(s), and a correct answer "Y" or "N". Test data will include only queries and relevant articles, but no 'Y/N' label. There should be no human intervention at any stage, including modifications to your retrieval system motivated by an inspection of the queries. You should not materially modify your retrieval system between the time you download the queries and the time you submit your runs. At most three runs from each group will be assessed. The submission format and evaluation methods are described below.

6.5 Task 5 Details

Our goal is to construct Yes/No question answering systems for legal queries. Given a 'Yes/No' legal bar exam question, your legal question answering system answers 'Yes' or 'No' to previously unseen queries. Training data consists of triples of a query, relevant article(s), and a correct answer "Y" or "N".
Test data will include only queries, with no 'Y/N' labels and no relevant articles.

7. Corpus Structure

The structure of the test corpora is derived from a general XML representation developed for use in RITEVAL, one of the tasks of the NII Testbeds and Community for Information access Research (NTCIR) project, as described at the following URL: http://sites.google.com/site/ntcir11riteval/ The RITEVAL format was developed for the general sharing of information retrieval data on a variety of domains.

7.1 Case Law Competition Corpus Structure - Task 1

The corpus is given as a flat list of files containing all query and noticed cases, for both the training and test datasets. The training dataset is described in a JSON file containing a mapping between each query case and its list of noticed cases, as in the example below:
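(The file names below are purely hypothetical, and the exact formatting of the distributed JSON file may differ; only the overall shape of the mapping is illustrated.)

{
  "query_case_001.txt": ["noticed_case_A.txt", "noticed_case_B.txt"],
  "query_case_002.txt": ["noticed_case_C.txt"]
}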
7.2 Case Law Competition Corpus Structure - Task 2

The corpus structure for Task 2 is given below:
task2_training_corpus
+--- 001
+------- base_case.txt
+------- entailed_fragment.txt
+------- paragraphs
+----------- 001.txt
+----------- 002.txt
+----------- ...
+----------- 046.txt
+--- 002
+------- base_case.txt
+------- entailed_fragment.txt
+------- paragraphs
+----------- 001.txt
+----------- 002.txt
+----------- ...
+----------- 211.txt
+--- train_labels.json
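The train_labels.json file maps each query case to the paragraph file(s) that entail its fragment. Based on the two example cases described below, its content would look roughly like this (the exact formatting may differ):

{
  "001": ["013.txt"],
  "002": ["003.txt", "045.txt"]
}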
For the query case 001, there are 46 paragraphs in the referenced case, among which is the expected answer, 013.txt, as given in the golden labels JSON file shown above. For the query case 002, there are 211 paragraphs in the referenced case, among which are the two paragraphs which entail the fragment of text for that case (003.txt and 045.txt, as given in the golden labels file).
For the case whose id is "001", the expected answer is "013.txt", meaning the entailed fragment (i.e., the decision) in that query can be entailed from paragraph 013 of the given noticed case. The decision in the query is not the final decision of the case; it is a decision for a part of the case, and a paragraph that supports this decision should be identified in the given noticed case. The test corpora will not include the JSON mapping file, and the task is to predict which paragraph(s) entail(s) the decision given in the entailed_fragment.txt file of each case.

7.3 Statute Law Competition Corpus Structure (Tasks 3, 4 and 5)

The format of the COLIEE competition corpora is derived from an NTCIR representation of confirmed relationships between questions and the articles and cases relevant to answering the questions, as in the following example:
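(A rough illustration of such an entry is given below; the pair/t1/t2 element names follow the RITEVAL convention and the texts are placeholders, so the actual files may differ in detail. An entry pairs a bar exam question with its relevant article(s) and a "Y"/"N" pair label.)

<pair id="H18-1-2" label="Y">
  <t1>(text of the relevant Civil Code article or articles)</t1>
  <t2>(text of the yes/no bar exam question)</t2>
</pair>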
For Tasks 3, 4 and 5, the training data will be the same. Groups who participate only in Task 3 can disregard the pair label.

8. Competition Results Submission Format

8.1. Task 1

For Task 1, a submission should consist of a single ASCII text file. Use a single space to separate columns, with three columns per line as follows:

000001 000018 univABC
000001 000045 univABC
000001 000130 univABC
000002 000433 univABC
. . .

where: 1. The first column is the query file name. 2. The second column is the official case number of the retrieved case. 3. The third column is called the "run tag" and should be a unique identifier for the submitting group, i.e., each run should have a different tag that identifies the group. Please restrict run tags to 12 or fewer letters and numbers, with no punctuation. In this example of a submission, you can see that query 000001 has multiple noticed cases (000018.txt, 000045.txt and 000130.txt).

8.2. Task 2

For Task 2, a submission should consist of a single ASCII text file. Use a single space to separate columns, with three columns per line as follows:

001 013 univABC
002 037 univABC
002 002 univABC
003 008 univABC
. . .

where: 1. The first column is the query id. 2. The second column is the number of the paragraph which entails the decision. 3. The third column is called the "run tag" and should be a unique identifier for the submitting group, i.e., each run should have a different tag that identifies the group. Please restrict run tags to 12 or fewer letters and numbers, with no punctuation. A query can have multiple entailing paragraph numbers.

8.3. Task 3

The submission format in Task 3 is the TREC eval format used in the trec_eval program. Use a single space to separate columns, with six columns per line as follows:

H21-5-3 Q0 213 1 0.8 univABC

where: 1. The first column is the query id. 2. The second column is "iter" for trec_eval and is not used in the evaluation; the information in this column will be ignored, but please write Q0 in it. 3. The third column is the official article number of the retrieved article. 4. The fourth column is the rank of the retrieved article. 5. The fifth column is the similarity value (a float) of the retrieved article. 6. The sixth column is called the "run tag" and should be a unique identifier for the submitting group, i.e., each run should have a different tag that identifies the group. Please restrict run tags to 12 or fewer letters and numbers, with no punctuation. Please refer to the README file of trec_eval.8.1.tar.gz for a detailed explanation. The most significant difference between the previous submission format and the new one is that it is necessary to provide ranked lists instead of simple answer sets. The maximum number of documents for each query is limited to 100. It is also encouraged to submit ranked list results with 100 candidates for each query. Since such submissions have smaller precision values due to the large number of candidates, it may be inappropriate to compare them with submissions containing small numbers of candidates. In order to distinguish these different types of submissions, please add the suffix "-L" to the submission result file name (e.g., when univABC is the run for the submission with a limited number of candidates, please use univABC-L for the submission with a large number of candidates).
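As an illustration of this format, the short Python sketch below writes a ranked Task 3 run file. The results dictionary is a hypothetical structure (query id to a list of (article number, score) pairs sorted by decreasing score), not part of any official tooling.

# Illustrative sketch: serialize ranked Task 3 results into the six-column trec_eval format.
# `results` is hypothetical: query id -> list of (article_number, score), best first.
results = {"H21-5-3": [("213", 0.8), ("415", 0.55)]}
run_tag = "univABC-L"  # "-L" suffix for runs with a large number of candidates

with open("task3_run.txt", "w") as out:
    for query_id, ranked in results.items():
        for rank, (article, score) in enumerate(ranked[:100], start=1):  # at most 100 per query
            out.write(f"{query_id} Q0 {article} {rank} {score} {run_tag}\n")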
8.4. Tasks 4 and 5

For Task 4 and Task 5, again a submission should consist of a single ASCII text file. Use a single space to separate columns, with three columns per line as follows:

H18-1-2 Y univABC
H18-5-A N univABC
H19-19-I Y univABC
H21-5-3 N univABC
. . .

where: 1. The first and third columns are as described above (query id and run tag). 2. The second column is "Y" or "N", indicating whether the Y/N question was confirmed to be true ("Y") or false ("N") by the relevant articles.

9. Presentation Schedule

Note: Times below are all GMT.

First Panel
09:00-09:10 - Greetings
09:10-09:40 - "Retrieving Legal Cases from a Large-scale Candidate Corpus" - Yixiao Ma (Tsinghua University), Yunqiu Shao (Tsinghua University), Bulou Liu (Tsinghua University), Yiqun Liu (Tsinghua University), Min Zhang (Tsinghua University) and Shaoping Ma (Tsinghua University) (Task 1 Winner)
09:40-10:10 - "BERT-based Ensemble Methods with Data Augmentation for Legal Textual Entailment in COLIEE Statute Law Task" - Masaharu Yoshioka (Hokkaido University), Yasuhiro Aoki (Hokkaido University) and Youta Suzuki (Hokkaido University) (Task 4 Winner)
10:10-10:30 - "BERT-based Ensemble Methods for Information Retrieval and Legal Textual Entailment in COLIEE Statute Law Task" - Masaharu Yoshioka (Hokkaido University), Youta Suzuki (Hokkaido University) and Yasuhiro Aoki (Hokkaido University)
10:30-11:00 - "ParaLaw Nets - Cross-lingual Sentence-level Pretraining for Legal Text Processing" - Ha-Thanh Nguyen (Japan Advanced Institute of Science and Technology), Vu Tran (Japan Advanced Institute of Science and Technology), Nguyen Le Minh (Japan Advanced Institute of Science and Technology), Minh-Phuong Nguyen (Japan Advanced Institute of Science and Technology), Thi-Hai-Yen Vuong (VNU University of Engineering and Technology), Minh Quan Bui (Japan Advanced Institute of Science and Technology (JAIST)), Minh-Chau Nguyen (Japan Advanced Institute of Science and Technology), Binh Dang (Japan Advanced Institute of Science and Technology) and Ken Satoh (National Institute of Informatics) (Task 5 Winner)
11:00-11:20 - "JNLP Team: Deep Learning Approaches for Legal Processing Tasks in COLIEE 2021" - Ha-Thanh Nguyen (Japan Advanced Institute of Science and Technology), Minh-Phuong Nguyen (Japan Advanced Institute of Science and Technology), Thi-Hai-Yen Vuong (VNU University of Engineering and Technology), Minh Quan Bui (Japan Advanced Institute of Science and Technology (JAIST)), Chau Minh Nguyen (Japan Advanced Institute of Science and Technology), Binh Dang (Japan Advanced Institute of Science and Technology), Vu Tran (Japan Advanced Institute of Science and Technology), Nguyen Le Minh (Graduate School of Information Science, Japan Advanced Institute of Science and Technology) and Ken Satoh (National Institute of Informatics and Sokendai)
11:20-11:40 - "Predicate's Argument Resolver and Entity Abstraction for Legal Question Answering: KIS teams at COLIEE 2021 shared task" - Masaki Fujita (Shizuoka University), Naoki Kiyota (Shizuoka University) and Yoshinobu Kano (Shizuoka University)
11:40-12:00 - "SIAT@COLIEE-2021: Combining Statistics Recall and Semantic Ranking for Legal Case Retrieval and Entailment" - Jieke Li (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences), Xiaoyan Zhao (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences), Junhao Liu (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences), Jiabao Wen (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences) and Min Yang (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
12:00-12:20 - "DoSSIER@COLIEE 2021: Leveraging dense retrieval and summarization re-ranking for case law retrieval" - Sophia Althammer (TU Vienna), Arian Askari (Leiden University), Suzan Verberne (Leiden University) and Allan Hanbury (TU Vienna)
12:20-13:30 - Lunch Break
13:30-15:30 - ICAIL 2021 conference events

Second Panel
15:30-16:30 - COLIEE summarization paper - Juliano Rabelo (University of Alberta), Mi-Young Kim (University of Alberta), Randy Goebel (University of Alberta), Yoshinobu Kano (Shizuoka University), Masaharu Yoshioka (Hokkaido University) and Ken Satoh (National Institute of Informatics)
16:30-17:00 - "To Tune or Not To Tune? Zero-shot Models for Legal Case Entailment" - Guilherme Moraes Rosa (NeuralMind), Ruan Chaves Rodrigues (NeuralMind), Roberto Lotufo (NeuralMind) and Rodrigo Nogueira (NeuralMind) (Task 2 Winner)
17:00-17:20 - "Yes, BM25 is a Strong Baseline for Legal Case Retrieval" - Guilherme Moraes Rosa (NeuralMind), Ruan Chaves Rodrigues (NeuralMind), Roberto Lotufo (NeuralMind) and Rodrigo Nogueira (NeuralMind)
17:20-17:30 - Break
17:30-18:00 - "Legal Norm Retrieval with Variations of the BERT Model Combined with TF-IDF Vectorization" - Sabine Wehnert (Georg-Eckert-Institute - Leibniz-Institute for International Textbook Research), Viju Sudhi (Otto-von-Guericke-Universität Magdeburg), Shipra Dureja (Otto-von-Guericke-Universität Magdeburg), Libin Kutty (Otto-von-Guericke-Universität Magdeburg), Saijal Shahania (Otto-von-Guericke-Universität Magdeburg) and Ernesto William De Luca (Georg-Eckert-Institute - Leibniz-Institute for International Textbook Research) (Task 3 Winner)
18:00-18:20 - "Using Contextual Word Embeddings and Graph Embeddings for Legal Textual Entailment Classification" - Sabine Wehnert (Otto-von-Guericke-Universität Magdeburg), Shipra Dureja (Otto-von-Guericke-Universität Magdeburg), Libin Kutty (Otto-von-Guericke-Universität Magdeburg), Viju Sudhi (Otto-von-Guericke-Universität Magdeburg) and Ernesto William De Luca (Georg-Eckert-Institute - Leibniz-Institute for International Textbook Research)
18:20-18:40 - "A Pentapus Grapples with Legal Reasoning" - Frank Schilder (Thomson Reuters), Dhivya Chinnappa (Thomson Reuters), Kanika Madan (Thomson Reuters), Jinane Harmouche (Thomson Reuters), Andrew Vold (Thomson Reuters), Hiroko Bretz (Thomson Reuters) and John Hudzina (Thomson Reuters)
18:40-19:00 - "BM25 and Transformer-based Legal Information Extraction and Entailment" - Mi-Young Kim (University of Alberta), Juliano Rabelo (University of Alberta) and Randy Goebel (University of Alberta)

10. Task winners

The list of winners is:
Task 1: Yixiao Ma et al. (Tsinghua University)
Task 2: Guilherme Moraes Rosa et al. (NeuralMind)
Task 3: Sabine Wehnert et al. (Georg-Eckert-Institute / Otto-von-Guericke-Universität Magdeburg)
Task 4: Masaharu Yoshioka et al. (Hokkaido University)
Task 5: Ha-Thanh Nguyen et al. (Japan Advanced Institute of Science and Technology / NII)
11. Application Details

Potential participants in COLIEE-2021 should respond to this call for participation by submitting an application. To apply, submit the application form and the memorandums from the following URLs to rabelo(at)ualberta.ca:
Questions and Further Information

rabelo(at)ualberta.ca

We will send an acknowledgement to the email address supplied in the form once we have processed the form.

COLIEE 2021 Proceedings

The complete proceedings for the 2021 COLIEE edition are available here.

Previous COLIEE editions

COLIEE 2020. A summary paper on the competition is available.
COLIEE 2019. A summary paper on the competition is available.
COLIEE 2018. Summary papers on the case law tasks and statute law tasks available.
COLIEE 2017
COLIEE 2016
COLIEE 2015
COLIEE 2014

Program Committee

Randy Goebel, University of Alberta, Canada
Yoshinobu Kano, Shizuoka University, Japan
Mi-Young Kim, University of Alberta, Canada
Maria Navas Loro, Technical University of Madrid, Spain
Nguyen Le Minh, JAIST, Japan
Juliano Rabelo, University of Alberta, Canada
Julien Rossi, University of Amsterdam, The Netherlands
Ken Satoh, NII, Japan
Jaromir Savelka, University of Pittsburgh, USA
Yunqiu Shao, Tsinghua University, China
Akira Shimazu, JAIST, Japan
Satoshi Tojo, JAIST, Japan
Vu Tran, JAIST, Japan
Josef Valvoda, Cambridge University, UK
Hannes Westermann, University of Montreal, Canada
Hiroaki Yamada, Tokyo Institute of Technology, Japan
Masaharu Yoshioka, Hokkaido University, Japan
Sabine Wehnert, Otto-von-Guericke-Universität Magdeburg, Germany
Last updated: Jun 2021