COLIEE-2018 CALL FOR TASK PARTICIPATION

Competition on Legal Information Extraction/Entailment (COLIEE)

COLIEE-2018 Workshop: Nov 12-13, 2018 (co-located with JURISIN 2018)
Venue: Raiosha, Hiyoshi Campus in Keio University, Yokohama, Japan


COLIEE competition registration due: July 10, 2018 (extended from June 30, 2018)

Those who wish to use previous COLIEE data for a trial should contact miyoung2(at)ualberta.ca.

Sponsored by
Alberta Machine Intelligence Institute (AMII)
University of Alberta
National Institute of Informatics (NII)
vLex Canada
Ross Intelligence
Intellicon

Download Call for Participation

Accepted papers list

As an associated event of JURISIN 2018, we are happy to announce the 5th Competition on Legal Information Extraction and Entailment (COLIEE-2018), which will include tasks on both statute law and case law.

Four tasks are included in the 2018 competition: Tasks 1 and 2 make up the case law competition, and Tasks 3 and 4 the statute law competition. Task 1 is a legal case retrieval task: it involves reading a new case Q and extracting supporting cases S1, S2, ..., Sn from the provided case law corpus that support the decision for Q. Task 2 is the legal case entailment task, which involves the identification of a paragraph from existing cases that entails the decision of a new case. As in previous COLIEE competitions, Task 3 is to consider a yes/no legal question Q and retrieve relevant statutes from a database of Japanese Civil Code statutes; Task 4 is to confirm entailment of a yes/no answer from the retrieved Civil Code statutes.

1. Task Description

1.1 (COLIEE Case Law Competition) Task 1: The Legal Case Retrieval Task

This legal case competition focuses on two aspects of legal information processing related to a database of predominantly Federal Court of Canada case law, provided by Compass Law.

The legal case retrieval task involves reading a new case Q, and extracting supporting cases S1, S2, ..., Sn for the decision of Q from the entire case law corpus. Throughout this document, we will call the cases supporting the decision of a new case 'noticed cases'.

1.2 (COLIEE Case Law Competition) Task 2: The Legal Case Entailment task

This task involves the identification of a paragraph from existing cases that entails the decision of a new case.

Given a decision Q of a new case and a relevant case R, a specific paragraph that entails the decision Q needs to be identified. Using some examples, we confirmed that the answer paragraph cannot be identified merely by information retrieval techniques: because case R is relevant to Q, many paragraphs in R can be relevant to Q regardless of entailment.

This task therefore requires a specific entailment method that compares the meaning of the decision Q with the meaning of each paragraph in R.

1.3 (COLIEE Statute Law Competition) Task 3: The Statute Law Retrieval Task

The COLIEE statute law competition focuses on two aspects of legal information processing related to answering yes/no questions from Japanese legal bar exams (the relevant data sets have been translated from Japanese to English).

Task 3 of the legal question answering task involves reading a legal bar exam question Q, and extracting the subset of Japanese Civil Code Articles S1, S2, ..., Sn from the entire Civil Code that is appropriate for answering the question, such that

Entails(S1, S2, ..., Sn , Q) or Entails(S1, S2, ..., Sn , not Q).

Given a question Q and the entire Civil Code Articles, we have to retrieve the set of "S1, S2, ..., Sn" as the answer of this track.

1.4 (COLIEE Statute Law Competition) Task 4: The Legal Question Answering Task

Task 4 of the legal question answering task involves the identification of an entailment relationship such that

Entails(S1, S2, ..., Sn , Q) or Entails(S1, S2, ..., Sn , not Q).

Given a question Q, we have to retrieve the relevant articles S1, S2, ..., Sn in phase one, and then determine whether the relevant articles entail "Q" or "not Q". The answer for this track is binary: "YES" ("Q") or "NO" ("not Q").

2. Data Corpus

2.1 Case Law Competition Data Corpus (Tasks 1 and 2)

COLIEE-2018 data is drawn from an existing collection of predominantly Federal Court of Canada case law.

Participants can choose which of the two sub-tasks they will participate in, as follows:

1) Task 1: Legal information retrieval task. Input is an unseen legal case Q, and the output should be the cases in the given legal case corpus that support the decision of the input case, i.e., the 'noticed cases'.

2) Task 2: Recognizing entailment between the decision of a new case and a relevant case. Input is a decision paragraph from an unseen case and a relevant case. Output should be a specific paragraph from the relevant case, which entails the decision of the unseen case.

2.2 Statute Law Competition Data Corpus (Tasks 3 and 4)

The corpus of legal questions is drawn from Japanese Legal Bar exams, and all the Japanese Civil Law articles are also provided (file format and access described below).

Participants can choose which of the two sub-tasks they will participate in, as follows:

1) Task 3: Legal Information Retrieval Task. Input is a bar exam 'Yes/No' question and output should be relevant civil law articles.

2) Task 4: Recognizing Entailment between Law Articles and Queries. Input is a bar exam 'Yes/No' question. After retrieving relevant articles using your method, you have to determine 'Yes' or 'No' as the output.

3. Measuring the Competition Results

3.1. Measuring the Case Law Competition Results (Tasks 1 and 2)

For Tasks 1 and 2, the evaluation measures will be precision, recall, and F-measure:

Precision = (the number of correctly retrieved cases (paragraphs) for all queries) / (the number of retrieved cases (paragraphs) for all queries),

Recall = (the number of correctly retrieved cases (paragraphs) for all queries) / (the number of relevant cases (paragraphs) for all queries),

F-measure = (2 x Precision x Recall) / (Precision + Recall)



In the evaluation of Tasks 1 and 2, we simply use the micro-average (the evaluation measure is calculated from the pooled results of all queries) rather than the macro-average (the evaluation measure is calculated for each query and then averaged).
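For illustration, the following minimal Python sketch computes these micro-averaged measures from hypothetical dictionaries that map query ids to sets of gold and retrieved case (or paragraph) ids; it is not the official evaluation script.

def micro_evaluate(gold, retrieved):
    # Pool the counts over all queries, then divide (micro-average).
    correct = sum(len(gold[q] & retrieved.get(q, set())) for q in gold)
    n_retrieved = sum(len(ids) for ids in retrieved.values())
    n_relevant = sum(len(ids) for ids in gold.values())
    precision = correct / n_retrieved if n_retrieved else 0.0
    recall = correct / n_relevant if n_relevant else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Hypothetical example: gold and retrieved map query ids to sets of case ids.
gold = {"t1-1": {"18", "45", "130"}, "t1-2": {"433"}}
retrieved = {"t1-1": {"18", "45", "7"}, "t1-2": {"433", "12"}}
print(micro_evaluate(gold, retrieved))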

3.2. Measuring the Statute Law Competition Results (Tasks 3 and 4)

For Task 3, the evaluation measures will be precision, recall, and F2-measure (since the IR process is a pre-processing step that selects candidate articles for the subsequent entailment process, we put emphasis on recall):

Precision = average of (the number of correctly retrieved articles for each query) / (the number of retrieved articles for each query),

Recall = average of (the number of correctly retrieved articles for each query) / (the number of relevant articles for each query),

F2-measure = (5 x Precision x Recall) / (4 x Precision + Recall)



In addition to the above evaluation measures, standard information retrieval measures such as Mean Average Precision and R-precision may be used for discussing the characteristics of the submitted results.

Another difference from previous COLIEE editions is the method used to calculate the final evaluation score over all queries. In COLIEE 2018, we use the macro-average (the evaluation measure is calculated for each query and the average over queries is used as the final score) instead of the micro-average (the evaluation measure is calculated from the pooled results of all queries).
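To make the contrast with the micro-average concrete, the following minimal Python sketch computes precision, recall, and F2 per query and then averages them over queries; the data layout is the same hypothetical one as in the earlier sketch, and this is not the official scorer.

def macro_evaluate_f2(gold, retrieved):
    # Compute precision, recall, and F2 per query, then average (macro-average).
    precisions, recalls, f2_scores = [], [], []
    for query_id, relevant in gold.items():
        got = retrieved.get(query_id, set())
        correct = len(relevant & got)
        p = correct / len(got) if got else 0.0
        r = correct / len(relevant) if relevant else 0.0
        f2 = 5 * p * r / (4 * p + r) if (4 * p + r) else 0.0
        precisions.append(p)
        recalls.append(r)
        f2_scores.append(f2)
    n = len(gold)
    return sum(precisions) / n, sum(recalls) / n, sum(f2_scores) / n

# Hypothetical example: gold and retrieved map query ids to sets of article numbers.
gold = {"H18-1-2": {"566", "567"}, "H21-5-3": {"213"}}
retrieved = {"H18-1-2": {"566"}, "H21-5-3": {"213", "94"}}
print(macro_evaluate_f2(gold, retrieved))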

For Task 4, the evaluation measure will be accuracy, with respect to whether the yes/no question was correctly confirmed:

Accuracy = (the number of queries which were correctly confirmed as true or false) / (the number of all queries)
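For completeness, a correspondingly minimal sketch of the accuracy computation over hypothetical dictionaries of gold and predicted "Y"/"N" labels (again, not the official scorer):

def accuracy(gold, predicted):
    # Fraction of queries whose "Y"/"N" answer was predicted correctly.
    return sum(predicted.get(q) == label for q, label in gold.items()) / len(gold)

print(accuracy({"H18-1-2": "Y", "H21-5-3": "N"},
               {"H18-1-2": "Y", "H21-5-3": "Y"}))  # 0.5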

4. Submission details

We require participants to submit a paper on their method and experimental results to the JURISIN 2018 workshop, in accordance with the instructions specified at http://research.nii.ac.jp/jurisin2018/, and to present the paper at a special session of JURISIN 2018.

These papers will be reviewed in the same way as regular submissions to JURISIN 2018, and extended versions may be submitted to the LNAI post-proceedings of JSAI-IsAI 2018 (the symposium that includes JURISIN 2018). Selected papers will be published in the LNAI post-proceedings.

5. Schedules


Jul 10, 2018 (extended from Jun 30, 2018): Task registration due.
Jun 30, 2018: Dry run data release.
Jul 31, 2018: Formal run data release.
Aug 25, 2018 (extended from Aug 15, 2018): Formal run submission due.
Aug 30, 2018 (extended from Aug 21, 2018): Notification of evaluation results and release of gold standard answers.
Sep 14, 2018 (extended from Aug 30, 2018): Paper submission due to JURISIN 2018.
Oct 7, 2018 (AOE) (extended from Sep 30, 2018): Notification of acceptance (by JURISIN).
Oct 17, 2018 (AOE) (extended from Oct 10, 2018): Camera-ready papers due (to JURISIN).
Nov 12-13, 2018: COLIEE 2018 Workshop.

6. Details of Each Task

6.1 Task 1 Details

Our goal is to explore and evaluate legal document retrieval technologies that are both effective and reliable.

The task investigates the performance of systems that search a set of case law for cases that support an unseen query case. The goal of the task is to return the 'noticed' cases in the given collection for a query. We call a case 'noticed' with respect to a query iff the case supports the decision of the query case. In this task, the query case does not include the decision, because our goal is to measure how accurately a machine can capture decision-supporting cases for a new case (with no decision yet).

A corpus composed of Federal Court of Canada case law will be provided. The process of executing the new query cases over the existing cases and generating the experimental runs should be entirely automatic. In the training data, each query case comes with a pool of candidate cases, and the noticed cases in the pool are identified. The test data will include only query cases and a pool of candidate cases, with no noticed-case information.

There should be no human intervention at any stage, including modifications to your retrieval system motivated by an inspection of the test queries. You should not peek at the test data before you submit your runs.

At most three runs from each group will be assessed. The submission format and evaluation methods are described below.

6.2 Task 2 Details

Our goal is to predict the decision of a new case by entailment from previous relevant cases.

As a simpler version of decision prediction, the decision of a new case and a noticed case will be given as a query. Your legal textual entailment system then identifies which paragraph in the noticed case entails the decision, by comparing the meaning of the query with the meaning of each paragraph.

The task investigates the performance of systems that identify a paragraph that entails the decision of an unseen case.

Training data consists of triples of a query, a noticed case, and a paragraph number of the noticed case by which the decision of the query is entailed. The process of executing the queries over the noticed cases and generating the experimental runs should be entirely automatic. Test data will include only queries and noticed cases, but no paragraph numbers.

There should be no human intervention at any stage, including modifications to your retrieval system motivated by an inspection of the test queries.

6.3 Task 3 Details

Our goal is to explore and evaluate legal document retrieval technologies that are both effective and reliable.

The task investigates the performance of systems that search a static set of civil law articles using previously unseen queries. The goal of the task is to return the articles in the collection that are relevant to a query. We call an article "relevant" to a query iff the query sentence can be answered Yes/No by entailment from the meaning of the article. If combining the meanings of more than one article (e.g., "A", "B", and "C") can answer a query sentence, then all the articles ("A", "B", and "C") are considered "relevant". If a query can be answered by an article "D", and it can also be answered by another article "E" independently, we consider both articles "D" and "E" "relevant". This task requires the retrieval of all the articles that are relevant to answering a query.

Japanese civil law articles (English translation alongside the Japanese) will be provided, and the training data consists of pairs of a query and its relevant articles. The process of executing the queries over the articles and generating the experimental runs should be entirely automatic. Test data will include only queries, with no relevant articles.

There should be no human intervention at any stage, including modifications to your retrieval system motivated by an inspection of the queries. You should not materially modify your retrieval system between the time you downloaded the queries and the time you submit your runs.

At most three runs from each group will be assessed. The submission format and evaluation methods are described below.

6.4 Task 4 Details

Our goal is to construct Yes/No question answering systems for legal queries, by entailment from the relevant articles.

Given a 'Yes/No' legal bar exam question, your legal information retrieval system retrieves relevant civil law articles. The task then investigates the performance of systems that answer 'Yes' or 'No' to previously unseen queries by comparing the meanings of the queries and the retrieved civil law articles. Training data consists of triples of a query, the relevant article(s), and a correct answer "Y" or "N". Test data will include only queries, with no 'Y/N' labels and no relevant articles.

There should be no human intervention at any stage, including modifications to your retrieval system motivated by an inspection of the queries. You should not materially modify your retrieval system between the time you downloaded the queries and the time you submit your runs.

At most three runs from each group will be assessed. The submission format and evaluation methods are described below.

7. Corpus Structure

The structure of the test corpora is derived from a general XML representation developed for use in RITEVAL, one of the tasks of the NII Testbeds and Community for Information access Research (NTCIR) project, as described at the following URL:

http://sites.google.com/site/ntcir11riteval/

The RITEVAL format was developed for the general sharing of information retrieval data across a variety of domains.

7.1 Case Law Competition Corpus Structure (Tasks 1 and 2)

The format of the COLIEE competition corpora is derived from an NTCIR representation of confirmed relationships between questions and cases, as in the following example:

<pair id="t1-1">
<query content_type="summary" description="The summary of the case created by human expert.">
The parties to this consolidated litigation over the drug at issue brought reciprocal motions, seeking that the opposing party be compelled to provide a further and better affidavit of documents...(omitted)
</query>
<query content_type="fact" description="The facts of the case created by human expert.">
[1] Tabib, Prothonotary: The Rules relating to affidavits of documents should be well known by litigants. Yet it seems that parties are either not following them strictly, or are assuming that others are not...(omitted)
</query>
<cases_noticed description="The corresponding case id in the candidate cases">
18,45,130
</cases_noticed>
<candidate_cases description="The candidate cases indexed by id">
<candidate_case id="0"> Case cited by: 2 cases Charest v. Can. (1993)....(omitted)
</candidate_case>
<candidate_case id="1"> Case cited by: one case Chehade, Re (1994), 83 F.T.R. 154 (TD) ....(omitted)
</candidate_case>
...(omitted)
<candidate_case id="199"> Desjardins v. Can. (A.G.) (2004), 260 F.T.R. 248 (FC) MLB headnote ....(omitted)
</candidate_case>
</candidate_cases> </pair>

The full xml information is here.

The above is an example of Task 1 training data, where query id "t1-1" has 3 noticed cases (IDs: 18, 45, 130) out of 200 candidate cases. The test corpora will not include the <cases_noticed> tag information. Out of the given candidate cases for each query, you will be required to retrieve the noticed cases.
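As an illustration only, a pair in this format can be read with the Python standard library; the file name below is a placeholder, and the tag names follow the example above:

import xml.etree.ElementTree as ET

# "task1_train.xml" is a placeholder file name, not an official one.
root = ET.parse("task1_train.xml").getroot()
for pair in root.iter("pair"):
    queries = {q.get("content_type"): q.text for q in pair.findall("query")}
    noticed = pair.findtext("cases_noticed", default="")  # absent in test data
    noticed_ids = [i.strip() for i in noticed.split(",") if i.strip()]
    candidates = {c.get("id"): c.text for c in pair.iter("candidate_case")}
    print(pair.get("id"), noticed_ids, len(candidates))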

<pair id="t2-1">
<query>
<case_description content_type="summary" description="The summary of the case created by human expert.">
The applicant owned and operated the Inn on the Park Hotel and the Holiday Inn in Toronto...(omitted)
</case_description>
<case_description content_type="fact" description="The facts of the case created by human expert.">
... </case_description>
<decision description="The decision of the query case."> The applicant submits that it is unreasonable to require the applicant to produce the information and documentation referred to in the domestic Requirement Letter within 62 days...(omitted)
</decision>
<cases_noticed description="The supporting case of the basic case">
<paragraph paragraph_id="1">
[1] Carruthers, C.J.P.E.I. : This appeal concerns the right of the Minister of National Revenue to request information from an individual pursuant to the provisions of s. 231.2(1) of the Income Tax Act , S.C. 1970-71-72, c. 63. Background
</paragraph>
<paragraph paragraph_id="2">
[2] The appellant, Hubert Pierlot, is the main officer and shareholder of Pierlot Family Farm Ltd. which carries on a farm operation in Green Meadows, Prince Edward Island.
</paragraph>
...(omitted)
<paragraph paragraph_id="26">
[26] I would, therefore, dismiss the appeal. Appeal dismissed. Editor: Steven C. McMinniman/vem [End of document]
</paragraph>
</cases_noticed>
</query>
<entailing_paragraph description="The paragraph id of the entailed case.">13</entailing_paragraph>
</pair>


The full xml information is here.

The above is an example of Task 2 training data, where the decision in the query is entailed by paragraph No. 13 of the given noticed case. The decision in the query is not the whole decision of the case; it is a decision on one part of the case, and the paragraph that supports this decision should be identified in the given noticed case. The test corpora will not include the <entailing_paragraph> element, and you are required to identify the paragraph number which entails the query decision.
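A corresponding illustrative sketch for the Task 2 structure, again with a placeholder file name and the tag names from the example above:

import xml.etree.ElementTree as ET

# "task2_train.xml" is a placeholder file name, not an official one.
root = ET.parse("task2_train.xml").getroot()
for pair in root.iter("pair"):
    decision = pair.find("query/decision").text
    paragraphs = {p.get("paragraph_id"): p.text for p in pair.iter("paragraph")}
    gold = pair.findtext("entailing_paragraph")  # absent in test data
    print(pair.get("id"), len(paragraphs), gold)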

7.2 Statute Law Competition Corpus Structure (Tasks 3 and 4)

The format of the COLIEE competition corpora is derived from an NTCIR representation of confirmed relationships between questions and the articles relevant to answering them, as in the following example:

<pair label="Y" id="H18-1-2">
<t1>
(Seller's Warranty in cases of Superficies or Other Rights)Article 566 (1)In cases where the subject matter of the sale is encumbered with for the purpose of a superficies, an emphyteusis, an easement, a right of retention or a pledge, if the buyer does not know the same and cannot achieve the purpose of the contract on account thereof, the buyer may cancel the contract. In such cases, if the contract cannot be cancelled, the buyer may only demand compensation for damages. (2)The provisions of the preceding paragraph shall apply mutatis mutandis in cases where an easement that was referred to as being in existence for the benefit of immovable property that is the subject matter of a sale, does not exist, and in cases where a leasehold is registered with respect to the immovable property.(3)In the cases set forth in the preceding two paragraphs, the cancellation of the contract or claim for damages must be made within one year from the time when the buyer comes to know the facts.
(Seller's Warranty in cases of Mortgage or Other Rights)Article 567(1)If the buyer loses his/her ownership of immovable property that is the object of a sale because of the exercise of an existing statutory lien or mortgage, the buyer may cancel the contract.(2)If the buyer preserves his/her ownership by incurring expenditure for costs, he/she may claim reimbursement of those costs from the seller.(3)In the cases set forth in the preceding two paragraphs, the buyer may claim compensation if he/she suffered loss.
</t1>
<t2>
There is a limitation period on pursuance of warranty if there is restriction due to superficies on the subject matter, but there is no restriction on pursuance of warranty if the seller's rights were revoked due to execution of the mortgage.
</t2>
</pair>

The above is an example where query id "H18-1-2" is confirmed to be answerable from article numbers 566 and 567 (relevant to Task 3). The pair label "Y" in this example means the answer for this query is "Yes", which is entailed from the relevant articles (relevant to Task 4).

For Tasks 3 and 4, the training data will be the same. Groups who participate only in Task 3 can disregard the pair label.
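For illustration, such a pair can be read in the same way as the case law examples; the file name below is a placeholder, and the tag names follow the example above:

import xml.etree.ElementTree as ET

# "statute_train.xml" is a placeholder file name, not an official one.
root = ET.parse("statute_train.xml").getroot()
for pair in root.iter("pair"):
    articles = (pair.findtext("t1") or "").strip()  # relevant civil law article text
    question = (pair.findtext("t2") or "").strip()  # bar exam 'Yes/No' question
    label = pair.get("label")  # "Y"/"N" answer; ignore for Task 3, absent in Task 4 test data
    print(pair.get("id"), label, question[:60])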


8. Competition Results Submission Format

8.1. Task 1

For Task 1, a submission should consist of a single ASCII text file. Use a single space to separate columns, with three columns per line as follows:

t1-1 18 univABC
t1-1 45 univABC
t1-1 130 univABC
t1-2 433 univABC
.
.
.
where:

1. The first column is the query id.
2. The second column is the official case number of the retrieved case.
3. The third column is called the "run tag" and should be a unique identifier for the submitting group, i.e., each run should have a different tag that identifies the group. Please restrict run tags to 12 or fewer letters and numbers, with no punctuation.
In this example of a submission, you can see that t1-1 has multiple noticed cases (18, 45, and 130).
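For example, a run file in this format could be produced with a few lines of Python; the results dictionary, file name, and run tag below are hypothetical:

# Hypothetical retrieval output: query id -> list of noticed case numbers.
results = {"t1-1": ["18", "45", "130"], "t1-2": ["433"]}
run_tag = "univABC"  # your own run tag, 12 or fewer letters and numbers

with open("task1_run.txt", "w") as out:
    for query_id, case_ids in results.items():
        for case_id in case_ids:
            out.write(f"{query_id} {case_id} {run_tag}\n")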

8.2. Task 2

For Task 2, a submission should consist of a single ASCII text file. Use a single space to separate columns, with three columns per line as follows:

t2-1 13 univABC
t2-2 37 univABC
t2-2 2 univABC
t2-3 8 univABC
.
.
.
where:

1. The first column is the query id.
2. The second column is the paragraph number which entails the decision.
3. The third column is called the "run tag" and should be a unique identifier for the submitting group, i.e., each run should have a different tag that identifies the group. Please restrict run tags to 12 or fewer letters and numbers, with no punctuation.
A query can have multiple entailing paragraph numbers.

8.3. Task 3

The submission format for Task 3 is the TREC format used by the trec_eval program. Use a single space to separate columns, with six columns per line as follows:

H21-5-3 Q0 213 1 0.8 univABC

Where

1. The first column is the query id.
2. The second column is "iter" for trec_eval and is not used in the evaluation; the information in this column will be ignored, but please write Q0 in it.
3. The third column is the official article number of the retrieved article.
4. The fourth column is the rank of the retrieved article.
5. The fifth column is the similarity value (a float) of the retrieved article.
6. The sixth column is called the "run tag" and should be a unique identifier for the submitting group, i.e., each run should have a different tag that identifies the group. Please restrict run tags to 12 or fewer letters and numbers, with no punctuation.

Please refer to the README file in trec_eval.8.1.tar.gz for a detailed explanation. The most significant difference from the previous submission format is that ranked lists must be provided instead of simple answer sets. The maximum number of documents for each query is limited to 100. You are also encouraged to submit ranked-list results with 100 candidates for each query. Since such submissions have smaller precision values due to the larger number of candidates, it may be inappropriate to compare them with runs containing small numbers of candidates. To distinguish these different types of submissions, please add the suffix "-L" to the submission result file name (e.g., when univABC is the run with a limited number of candidates, use univABC-L for the run with a large number of candidates).
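As an illustration, a ranked list could be written in this six-column format as follows; the ranked dictionary, file name, and run tag are hypothetical:

# Hypothetical ranked output: query id -> (article number, similarity) pairs,
# already sorted by decreasing similarity.
ranked = {"H21-5-3": [("213", 0.8), ("94", 0.55)]}
run_tag = "univABC-L"  # "-L" suffix marks a run with a large candidate list

with open("task3_run.txt", "w") as out:
    for query_id, articles in ranked.items():
        for rank, (article, score) in enumerate(articles[:100], start=1):
            out.write(f"{query_id} Q0 {article} {rank} {score} {run_tag}\n")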

8.4. Task 4

For Task 4, a submission should again consist of a single ASCII text file. Use a single space to separate columns, with three columns per line as follows:

H18-1-2 Y univABC
H18-5-A N univABC
H19-19-I Y univABC
H21-5-3 N univABC
.
.
.
where:

1. The first and third columns are the query id and the "run tag", as in the other tasks.
2. The second column is "Y" or "N", indicating whether the Y/N question was confirmed to be true ("Y") or false ("N") by the relevant articles.

Accepted Papers List

Masaharu Yoshioka and Zihao Song, "HUKB at COLIEE2018 Information Retrieval Task"


Ying Chen, Yilu Zhou, Zhen Lu, Hao Sun and Wenjun Yang, "Legal Information Retrieval by Association Rules"

Wilco Draijer and Suzan Verberne, "Case law retrieval with doc2vec and Elasticsearch"

Juliano Rabelo, Mi-Young Kim, Housam Babiker and Randy Goebel, "Legal Information Extraction and Entailment for Statute Law and Case Law"

Vu Tran, Son Truong Nguyen and Minh Le Nguyen, "JNLP Group: Legal Information Retrieval with Summary and Logical Structure Analysis"

Ryosuke Taniguchi, Reina Hoshino and Yoshinobu Kano, "Legal Question Answering System using FrameNet"

Reina Hoshino, Ryosuke Taniguchi, Naoki Kiyota and Yoshinobu Kano, "Question Answering System for Legal Bar Examination using Predicate Argument Structure"

Moemedi Lefoane, Tshepho Koboyatshwene and Lakshmi Narasimhan, "KNN CLUSTERING APPROACH TO LEGAL PRECEDENCE RETRIEVAL"

Program Chairs

Ken Satoh, ksatoh(at)nii.ac.jp, National Institute of Informatics and Sokendai, Japan
Randy Goebel, rgoebel(at)ualberta.ca, University of Alberta, Canada
Mi-Young Kim, miyoung2(at)ualberta.ca, Department of Computing Science, University of Alberta, Canada
Yoshinobu Kano, kano(at)inf.shizuoka.ac.jp, Shizuoka University, Japan
Masaharu Yoshioka, yoshioka(at)ist.hokudai.ac.jp, Hokkaido University, Japan
Juliano Rabelo, rabelo(at)ualberta.ca, University of Alberta, Canada

Questions and Further Information

miyoung2(at)ualberta.ca

Application Details

Potential participants in COLIEE-2018 should respond to this call for participation by submitting an application. To apply, submit the application form and the relevant memorandum(s) from the following URLs to miyoung2(at)ualberta.ca:

Application:
http://www.ualberta.ca/~miyoung2/COLIEE2018/application.pdf
Memorandum for Tasks 1 and/or 2 (Case law competition)
http://www.ualberta.ca/~miyoung2/COLIEE2018/Case_law_competition_memorandum.pdf
Memorandum for Tasks 3 and/or 4 (Statute law competition, English data)
http://www.ualberta.ca/~miyoung2/COLIEE2018/Statute_law_competition_memorandum_EN.pdf
Memorandum for Tasks 3 and/or 4 (Statute law competition, Japanese data)
http://www.ualberta.ca/~miyoung2/COLIEE2018/Statute_law_competition_memorandum_JA.pdf

We will send an acknowledgement to the email address supplied in the form once we have processed the form.

Previous COLIEE

COLIEE 2017
COLIEE 2016
COLIEE 2015
Last updated: Apr., 2018