## Spectral Clustering

```
  2-----6
 / \    |
1   4   |
 \ / \  |
  3   5-+
```


The goal is to find two clusters in this graph using spectral clustering on the Laplacian matrix. Compute the Laplacian of the graph, then compute the second eigenvector of the Laplacian (the one corresponding to the second smallest eigenvalue).

```r
# Adjacency matrix of the graph
A = matrix(c(0,1,1,0,0,0,
             1,0,0,1,0,1,
             1,0,0,1,0,0,
             0,1,1,0,1,0,
             0,0,0,1,0,1,
             0,1,0,0,1,0), nrow=6, ncol=6, byrow=TRUE)
colnames(A) <- c("1","2","3","4","5","6")
rownames(A) <- c("1","2","3","4","5","6")

# Degree matrix: each diagonal entry is the degree of the corresponding node
B = diag(rowSums(A))

# Graph Laplacian
L = B - A
print(L)

# Eigendecomposition of the Laplacian
e <- eigen(L)
```



R can be used to get the eigenvalues and eigenvectors, but Wolfram Alpha gives the values below.

### Eigenvalues

$\lambda_1 = 5,\quad \lambda_2 = 3,\quad \lambda_3 = 3,\quad \lambda_4 = 2,\quad \lambda_5 = 1,\quad \lambda_6 = 0$

### Eigenvectors

Each row of this matrix is an eigenvector, listed in the same order as the eigenvalues above:

$\begin{pmatrix} 1& -2& -1& 2& -1& 1\\ 0& -1& 1& -1& 0& 1\\ 1& -1& 0& -1& 1& 0\\ 1& 1& -1& -1& -1& 1\\ -1& 0& -1& 0& 1& 1\\ 1& 1& 1& 1& 1& 1\\ \end{pmatrix}$

The second smallest eigenvalue is $\lambda_5 = 1$.

So the corresponding eigenvector, the 5th row of the matrix above (the Fiedler vector), is

$\begin{pmatrix} -1& 0& -1& 0& 1& 1\\ \end{pmatrix}$

Splitting the nodes by the sign of the entries, nodes 1 and 3 (negative) form one cluster and nodes 5 and 6 (positive) form the other. Nodes 2 and 4 have zero entries, so they can be assigned to either cluster.
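As a cross-check on the hand computation, here is a short NumPy sketch (my own, not part of the course material) that builds the same Laplacian and splits the nodes by the sign of the Fiedler vector:

```python
import numpy as np

# Adjacency matrix of the six-node graph above
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 0]])

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # graph Laplacian

# eigh returns eigenvalues of a symmetric matrix in ascending order,
# so column 1 is the eigenvector of the second smallest eigenvalue
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]

# Cluster assignment by sign of the Fiedler vector entries
print(np.sign(np.round(fiedler, 8)))
```

Up to an overall sign, the second eigenvector comes out proportional to (-1, 0, -1, 0, 1, 1), reproducing the split above.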

## DGIM Algorithm

I think I understood the basic Datar-Gionis-Indyk-Motwani (DGIM) algorithm, which is explained in the book “Mining of Massive Datasets” by Jure Leskovec (Stanford Univ.), Anand Rajaraman (Milliway Labs), and Jeffrey D. Ullman (Stanford Univ.).

I will add more details later, but the diagram below explains it. I used TikZ to draw the picture; I will check the TikZ code into my GitHub and post the link.

Suppose we are using the DGIM algorithm of Section 4.6.2 to estimate the number of 1's in suffixes of a sliding window of length 40. The current timestamp is 100. Note: we are showing timestamps as absolute values, rather than modulo the window size, as DGIM would do. Suppose that at times 101 through 105, 1's appear in the stream. Compute the set of buckets that would exist in the system at time 105. 
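The book's figure with the initial buckets is not reproduced here, so as a sketch, here is the DGIM bucket-update rule itself in Python (my own illustration: buckets are (timestamp, size) pairs, sizes are powers of two, and at most two buckets of each size are kept), applied to the five 1's arriving at times 101 through 105 over an assumed initially empty window:

```python
def dgim_add_one(buckets, timestamp, window):
    """Add a 1 arriving at `timestamp`; `buckets` is kept newest-first."""
    # Drop buckets whose most recent 1 has slid out of the window
    buckets = [(t, s) for (t, s) in buckets if t > timestamp - window]
    # The incoming 1 becomes a new bucket of size 1
    buckets.insert(0, (timestamp, 1))
    # Merge rule: whenever three buckets share a size, merge the two oldest
    size = 1
    while sum(1 for (_, s) in buckets if s == size) == 3:
        idx = [i for i, (_, s) in enumerate(buckets) if s == size]
        i, j = idx[1], idx[2]               # the two oldest of that size
        merged = (buckets[i][0], 2 * size)  # keep the newer timestamp
        buckets = [b for k, b in enumerate(buckets) if k not in (i, j)]
        buckets.append(merged)
        buckets.sort(key=lambda b: -b[0])
        size *= 2
    return buckets

# Feed the 1's at times 101..105 into an (assumed) initially empty window
buckets = []
for t in range(101, 106):
    buckets = dgim_add_one(buckets, t, window=40)
print(buckets)  # → [(105, 1), (104, 2), (102, 2)]
```

Each new 1 becomes a size-1 bucket, and whenever three buckets of one size exist, the two oldest merge into one of twice the size; that cascade is what produces the buckets of sizes 1, 2, 2 here.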

This is the general market-basket problem: finding which items frequently occur together across many shoppers’ baskets, based on a threshold. The threshold is a minimum number of occurrences; itemsets that appear in at least that many baskets are considered frequent.

These itemsets can be singletons, pairs of items (doubletons), tripletons, and so on.

Imagine there are 100 baskets, numbered 1,2,...,100, and 100 items, similarly numbered. Item i is in basket j if and only if i divides j evenly. For example, basket 24 is the set of items {1,2,3,4,6,8,12,24}. Describe all the association rules that have 100% confidence. Which of the following rules has 100% confidence?

Below is a brute-force R approach to such a problem; the number of items here is small. In practice, such data mining algorithms deal with large quantities of data and a fixed amount of memory. One such algorithm is the A-priori algorithm.

Each `if` block checks one candidate rule, such as

{8,10} -> 20

which asks whether item 20 is found in every basket that contains both items 8 and 10. A rule has 100% confidence exactly when its check never prints a basket.

```r
library(Hmisc)  # for the %nin% ("not in") operator

for (i in 1:100) {
  # Collect the items in basket i: item j is in basket i iff j divides i
  a <- c()
  for (j in 1:100) {
    if (i %% j == 0) {
      a <- append(a, j)
    }
  }
  # Each check prints any basket that violates its rule
  if (8 %in% a && 10 %in% a && 20 %nin% a) {             # {8,10} -> 20
    print(a)
  }
  if (3 %in% a && 1 %in% a && 6 %in% a && 12 %nin% a) {  # {1,3,6} -> 12
    print(a)
  }
  if (8 %in% a && 12 %in% a && 96 %nin% a) {             # {8,12} -> 96
    print(a)
  }
  if (3 %in% a && 5 %in% a && 1 %nin% a) {               # {3,5} -> 1
    print(a)
  }
}
```
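As a cross-check, the same four rules can be tested directly in Python (my own sketch; basket j contains item i iff i divides j):

```python
# Build the 100 baskets: basket j contains item i iff i divides j
baskets = [{i for i in range(1, 101) if j % i == 0} for j in range(1, 101)]

def confidence(antecedent, consequent):
    """Fraction of baskets containing `antecedent` that also contain `consequent`."""
    matching = [b for b in baskets if antecedent <= b]
    return sum(consequent in b for b in matching) / len(matching)

for ante, cons in [({8, 10}, 20), ({1, 3, 6}, 12), ({8, 12}, 96), ({3, 5}, 1)]:
    print(sorted(ante), "->", cons, ":", confidence(ante, cons))
```

Only {8,10} -> 20 and {3,5} -> 1 reach confidence 1: baskets containing both 8 and 10 are the multiples of 40, which are all multiples of 20, and every basket contains item 1.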

## The Caltech-JPL Summer School on Big Data Analytics

This treasure trove of videos teaches many machine learning subjects. It is not intended to be a typical Coursera course, because there are no deadlines or tests.

There is much to write about what I am learning from these videos, but for now these measures to assess the costs and benefits of a classification model are intended as a reference.

## Frequent Itemsets

I am reading Chapter 6 on Frequent Itemsets. I hope to understand the A-priori algorithm.

## PageRank

Suppose we compute PageRank with a β of 0.7, and we introduce the additional constraint that the sum of the PageRanks of the three pages must be 3, to handle the problem that otherwise any multiple of a solution will also be a solution. Compute the PageRanks a, b, and c of the three pages A, B, and C, respectively.

$r_j = \beta \sum_{i\rightarrow{j}} r_i/d_i + (1 - \beta)/n$

where $d_i$ is the out-degree of page $i$, and $n = 3$ is the number of pages.
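Reading the links off the matrix $M$ used in the code (A -> {B, C}, B -> C, C -> C), the system can also be solved by hand. This is my own derivation, with the ranks first normalized to sum to 1; page A has no incoming links, so $a$ is just the teleport term:

$a = (1-\beta)/3 = 0.1\\ b = \beta\,a/2 + (1-\beta)/3 = 0.035 + 0.1 = 0.135\\ c = 1 - a - b = 0.765$

Scaling so the three PageRanks sum to 3 gives $a = 0.3$, $b = 0.405$, $c = 2.295$.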

My R code is this.

```r
# Column-stochastic link matrix: A -> {B, C}, B -> C, C -> C
M = matrix(c(0,1/2,1/2, 0,0,1, 0,0,1), ncol=3)
e = matrix(c(1,1,1), ncol=1)

# Start the power iteration from the uniform distribution
v1 = e / 3
for (i in 1:5) {
  v1 = ((0.7 * M) %*% v1) + (((1 - 0.7) * e) / 3)
}

# Rescale so the three PageRanks sum to 3
v1 = v1 * 3
print(v1)
```

The output:

```
      [,1]
[1,] 0.300
[2,] 0.405
[3,] 2.295
```
[3,] 2.295