Midterm Paper - Data Mining and Text Mining | CS 583

Course: CS 583, Data Mining and Text Mining; Professor: Liu; University of Illinois - Chicago.

Typology: Exams

Pre 2010

Uploaded on 07/23/2009

koofers-user-2yc
koofers-user-2yc 🇺🇸

10 documents

1 / 4

Toggle sidebar

This page cannot be seen from the preview

Don't miss anything!

bg1
1. (a) Name three classification techniques. No need to explain how they work.
(b) (3%) How do you describe overfitting in classification?
(c) (3%) Given the following decision tree, generate all the rules from the tree. Note that
we have two classes, Yes and No.
(d) List three objective interestingness measures of rules, and list two subjective
interestingness measures of rules. No need to explain.
(e) (5) To build a naïve Bayesian classifier, we can make use of association rule mining.
How do you compute P(Ai = aj | C = ck) from association rules, where Ai is an attribute, aj
is a value of Ai, and ck is a class value of the class attribute C?
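A minimal sketch of the idea behind part (e), assuming we have mined rules of the form {Ai = aj} -> {C = ck} together with their absolute support counts; the function and dictionary names below are hypothetical. The conditional probability is the rule's support count divided by the support count of the class.

```python
# Hypothetical illustration: estimate P(Ai = aj | C = ck) from rule support counts.
# rule_support[((attr, value), cls)] is the number of rows with Ai = aj AND C = ck,
# i.e. the absolute support count of the rule {Ai = aj} -> {C = ck}.
# class_support[cls] is the number of rows with C = ck.

def conditional_prob(rule_support, class_support, attr, value, cls):
    """P(Ai = aj | C = ck) = sup({Ai = aj, C = ck}) / sup({C = ck})."""
    joint = rule_support.get(((attr, value), cls), 0)
    return joint / class_support[cls]

# Example with made-up counts:
rule_support = {(("A1", "x"), "yes"): 30}
class_support = {"yes": 40, "no": 60}
print(conditional_prob(rule_support, class_support, "A1", "x", "yes"))  # 0.75
```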
2. (10%) Given the following table with three attributes, a1, a2, and a3:
a1  a2  a3
C   B   H
B   F   S
A   F   F
C   B   H
B   F   G
B   E   O
We want to mine all the large (or frequent) itemsets in the data. Assume the minimum
support is 30%. Following the Apriori algorithm, give the set of large itemsets in L1, L2, …,
and candidate itemsets in C2, C3, … (after the join step and the prune step). What additional
pruning can be done in candidate generation and how?
[Figure for Question 1(c): decision tree over the attributes Age (>= 40 / < 40), Sex (M / F), income (>= 50k / < 50k), and job (y / n), with leaf classes Yes and No; the exact tree structure is not recoverable from the text.]
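For Question 2, here is a minimal sketch (not the graded answer) of the Apriori candidate-generation step: the join combines two frequent (k-1)-itemsets that agree on their first k-2 items, and the prune step discards any candidate with an infrequent (k-1)-subset. The example frequent 2-itemsets are illustrative, not computed from the table above.

```python
from itertools import combinations

def apriori_gen(prev_frequent):
    """Generate candidate k-itemsets from frequent (k-1)-itemsets (join + prune).

    prev_frequent: set of sorted tuples, all of length k-1.
    """
    prev = sorted(prev_frequent)
    k_minus_1 = len(prev[0])
    candidates = set()
    # Join step: merge two itemsets that share their first k-2 items.
    for a, b in combinations(prev, 2):
        if a[:-1] == b[:-1] and a[-1] < b[-1]:
            candidates.add(a[:-1] + (a[-1], b[-1]))
    # Prune step: every (k-1)-subset of a candidate must itself be frequent.
    pruned = set()
    for c in candidates:
        if all(tuple(s) in prev_frequent for s in combinations(c, k_minus_1)):
            pruned.add(c)
    return pruned

# Illustrative frequent 2-itemsets:
f2 = {("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")}
print(apriori_gen(f2))  # {('A', 'B', 'C')} -- ABD and ACD are pruned
```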

3. (10%) In multiple minimum support association rule mining, we can assign a minimum
support to each item, called the minimum item support (MIS). We define that an itemset
{item1, item2, …} is large (or frequent) if its support is greater than or equal to

min(MIS(item1), MIS(item2), …)

Given the transaction data:

{Beef, Bread}
{Bread, Cloth}
{Bread, Cloth, Milk}
{Cheese, Boots}
{Beef, Bread, Cheese, Shoes}
{Beef, Bread, Cheese, Milk}
{Bread, Milk, Cloth}

If we have the following minimum item support assignments for the items in the transaction
data:

MIS(Milk) = 50%
MIS(Bread) = 70%

The MIS values for the rest of the items in the data are all 25%.

Following the MSapriori algorithm, give the set of large (or frequent) itemsets in L1, L2, ….
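A minimal sketch of the frequency test defined above (not the full MSapriori candidate-generation procedure): an itemset's support is counted over the transactions and compared with the minimum of its items' MIS values. The data and MIS values are those given in the question; the function names are illustrative.

```python
transactions = [
    {"Beef", "Bread"},
    {"Bread", "Cloth"},
    {"Bread", "Cloth", "Milk"},
    {"Cheese", "Boots"},
    {"Beef", "Bread", "Cheese", "Shoes"},
    {"Beef", "Bread", "Cheese", "Milk"},
    {"Bread", "Milk", "Cloth"},
]

MIS = {"Milk": 0.50, "Bread": 0.70}
DEFAULT_MIS = 0.25  # all remaining items

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    hits = sum(1 for t in transactions if set(itemset) <= t)
    return hits / len(transactions)

def is_large(itemset):
    """Large (frequent) iff support >= min of the items' MIS values."""
    min_mis = min(MIS.get(item, DEFAULT_MIS) for item in itemset)
    return support(itemset) >= min_mis

print(is_large({"Bread"}))          # support 6/7 >= 0.70 -> True
print(is_large({"Milk"}))           # support 3/7 <  0.50 -> False
print(is_large({"Milk", "Cloth"}))  # support 2/7 >= min(0.50, 0.25) -> True
```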

4. (10%) Given the following training data, which has two attributes A and B, and a class C,
compute all the probability values required to build a naïve Bayesian classifier. Ignore
smoothing.

A  B  C
m  t  y
m  s  y
g  q  y
h  s  y
g  q  y
g  q  n
g  s  n
h  t  n
h  q  n
m  t  n

Answer:

P(C = y) =          P(C = n) =

P(A = m | C = y) =
P(A = g | C = y) =
P(A = h | C = y) =
P(A = m | C = n) =
P(A = g | C = n) =
P(A = h | C = n) =
P(B = t | C = y) =
P(B = s | C = y) =
P(B = q | C = y) =
P(B = t | C = n) =
P(B = s | C = n) =
P(B = q | C = n) =
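A minimal sketch of how these values could be computed programmatically, using the training table above and no smoothing: class priors are count(C = c) / n, and each conditional is count(attribute value and class) / count(class).

```python
from collections import Counter

# (A, B, C) rows taken from the training table above.
rows = [
    ("m", "t", "y"), ("m", "s", "y"), ("g", "q", "y"), ("h", "s", "y"), ("g", "q", "y"),
    ("g", "q", "n"), ("g", "s", "n"), ("h", "t", "n"), ("h", "q", "n"), ("m", "t", "n"),
]

class_count = Counter(c for _, _, c in rows)
n = len(rows)

# Class priors: P(C = c) = count(C = c) / n
priors = {c: class_count[c] / n for c in class_count}

# Conditionals without smoothing:
# P(A = a | C = c) = count(A = a and C = c) / count(C = c), likewise for B.
cond_a = Counter((a, c) for a, _, c in rows)
cond_b = Counter((b, c) for _, b, c in rows)

p_a_given_c = {(a, c): cond_a[(a, c)] / class_count[c]
               for a in {"m", "g", "h"} for c in class_count}
p_b_given_c = {(b, c): cond_b[(b, c)] / class_count[c]
               for b in {"t", "s", "q"} for c in class_count}

print(priors)                   # {'y': 0.5, 'n': 0.5}
print(p_a_given_c[("m", "y")])  # 2/5 = 0.4
```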

5. Use agglomerative clustering to cluster the following one-dimensional data: 1, 2, 4, 6, 9,
11, 20, 23, 27, 30, 34, 100, 120, 130. You are required to draw the cluster tree and write the
value of the cluster center represented by each node next to the node.
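A minimal sketch of agglomerative clustering on one-dimensional data; the merge criterion (always merge the two clusters with the closest centers) is an assumption, since the question does not fix the linkage. Each merge records the new cluster center, which is what the tree nodes are labeled with.

```python
def agglomerate(points):
    """Repeatedly merge the two clusters with the closest centers (1-D data).

    Returns the merge sequence as (center_a, center_b, new_center) tuples.
    """
    # Each cluster is (center, size); start with singleton clusters.
    clusters = [(float(p), 1) for p in points]
    merges = []
    while len(clusters) > 1:
        # Find the pair of clusters whose centers are closest.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = abs(clusters[i][0] - clusters[j][0])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        (ca, na), (cb, nb) = clusters[i], clusters[j]
        merged = ((ca * na + cb * nb) / (na + nb), na + nb)  # size-weighted new center
        merges.append((ca, cb, merged[0]))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return merges

data = [1, 2, 4, 6, 9, 11, 20, 23, 27, 30, 34, 100, 120, 130]
for a, b, c in agglomerate(data):
    print(f"merge centers {a:.2f} and {b:.2f} -> new center {c:.2f}")
```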


6. Given the classification results in the following confusion matrix, compute the classification
accuracy, precision, and recall scores of the positive data.

                     Classified Positive   Classified Negative
Correct Positive             50                    10
Correct Negative              5                   200
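A minimal sketch of the score definitions, with the counts read from the matrix above (TP = 50, FN = 10, FP = 5, TN = 200).

```python
# Counts from the confusion matrix above; the positive class is the class of interest.
tp, fn = 50, 10   # correct positives classified as positive / negative
fp, tn = 5, 200   # correct negatives classified as positive / negative

accuracy = (tp + tn) / (tp + fn + fp + tn)   # fraction of all cases classified correctly
precision = tp / (tp + fp)                   # of those classified positive, how many are truly positive
recall = tp / (tp + fn)                      # of the true positives, how many were found

print(f"accuracy  = {accuracy:.4f}")   # 250 / 265
print(f"precision = {precision:.4f}")  # 50 / 55
print(f"recall    = {recall:.4f}")     # 50 / 60
```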

7. Given the following table with three attributes, a1, a2, and a3:

a1  a2  a3
C   B   H
B   F   S
A   F   F
C   B   H
B   F   G
B   E   O

we want to mine all the large (or frequent) itemsets using the multiple minimum support
technique. If we have the following minimum item support assignments for the items,

MIS(a2=F) = 60%

The MIS values for the rest of the items in the data are all 30%.

Following the MSapriori algorithm, give the set of large (or frequent) itemsets in L1, L2, …
and candidate itemsets in C2, C3, … (after the join step and the prune step)?
