

Self-Organizing Maps



Self-Organizing Map
The Self-Organizing Map (SOM), introduced by Teuvo Kohonen, is a data visualization technique that helps make sense of high-dimensional data by reducing its dimensions to a map. A SOM also embodies the idea of clustering by grouping similar data together. It can therefore be said that a SOM both reduces data dimensions and displays similarities among data.
With a SOM, clustering is performed by having several units compete for the current object. Once the data have been entered into the system, the network of artificial neurons is trained by presenting it with the inputs. The unit whose weight vector is closest to the current object becomes the winning, or active, unit. During the training stage, the weight values are gradually adjusted in an attempt to preserve the neighborhood relationships that exist within the input data set. The weights of the winning unit, as well as those of its neighbors, are adjusted so that they move closer to the input object.
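
As a rough sketch of this update step (not code from any particular SOM implementation), the following Python fragment, assuming NumPy, a 2-D grid of weight vectors of shape (rows, cols, dim), and a Gaussian neighborhood function, pulls the winning unit and its neighbors toward the current input; all names are illustrative:

    import numpy as np

    def update_step(weights, x, bmu, learning_rate, radius):
        """Pull the winning unit `bmu` (row, col) and its neighbors toward sample x."""
        rows, cols, _ = weights.shape
        for r in range(rows):
            for c in range(cols):
                # Squared grid distance from this unit to the winner
                grid_dist_sq = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
                # Gaussian neighborhood: units closer to the winner learn more
                influence = np.exp(-grid_dist_sq / (2 * radius ** 2))
                weights[r, c] += learning_rate * influence * (x - weights[r, c])
        return weights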

Teuvo Kohonen writes: "The SOM is a new, effective software tool for the visualization of high-dimensional data. It converts complex, nonlinear statistical relationships between high-dimensional data items into simple geometric relationships on a low-dimensional display. As it thereby compresses information while preserving the most important topological and metric relationships of the primary data items on the display, it may also be thought to produce some kind of abstractions."

Reducing Data Dimensions
Unlike many other learning techniques in neural networks, training a SOM requires no target vector. A SOM learns to classify the training data without any external supervision.


(Figure from http://www.lohninger.com/helpcsuite/kohonen_network_-_background_information.htm)

Data Similarity
The Best Matching Unit is found by running through all weight vectors and calculating the distance from each weight to the sample vector. The weight with the shortest distance is the winner. There are numerous ways to determine the distance; the most commonly used are the Euclidean distance and the cosine distance.
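
A minimal sketch of this search, assuming NumPy, the same (rows, cols, dim) weight grid as above, and the Euclidean distance (the function name is illustrative):

    import numpy as np

    def find_bmu(weights, x):
        # Euclidean distance from every weight vector to the sample
        dists = np.linalg.norm(weights - x, axis=2)
        # (row, col) of the unit with the shortest distance: the BMU
        return np.unravel_index(np.argmin(dists), dists.shape)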

SOM Algorithm
Each data item in the data set gains a representation on the map through competition. SOM mapping starts by initializing the weight vectors. From there a sample vector is selected at random, and the map of weight vectors is searched to find which weight best represents that sample. Each weight vector has neighboring weights that are close to it. The chosen weight is rewarded by becoming more like the randomly selected sample vector, and its neighbors are rewarded in the same way. From this step on, the number of neighbors and how much each weight can learn decrease over time. This whole process is repeated a large number of times, usually more than 1,000 times.

In sum, learning occurs in the following steps, repeated over many iterations (a code sketch follows the list):

  1. Each node's weights are initialized.
  2. A vector is chosen at random from the set of training data.
  3. Every node is examined to calculate which one's weights are most like the input vector. The winning node is commonly known as the Best Matching Unit (BMU).
  4. Then the neighborhood of the BMU is calculated. The number of neighbors decreases over time.
  5. The winning weight is rewarded by becoming more like the sample vector. Its neighbors also become more like the sample vector. The closer a node is to the BMU, the more its weights are altered; the farther a neighbor is from the BMU, the less it learns.
  6. Repeat from step 2 for N iterations.
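
The six steps above can be put together in a short training loop. The sketch below is one possible reading of the algorithm, assuming NumPy, a rectangular grid, an exponentially decaying learning rate and radius, and a Gaussian neighborhood; all names and default values are illustrative:

    import numpy as np

    def train_som(data, rows=10, cols=10, n_iters=2000,
                  init_lr=0.5, init_radius=5.0, seed=0):
        rng = np.random.default_rng(seed)
        dim = data.shape[1]
        # Step 1: initialize the weight vectors
        weights = rng.random((rows, cols, dim))
        for t in range(n_iters):
            # Step 2: choose a training vector at random
            x = data[rng.integers(len(data))]
            # Step 3: find the Best Matching Unit
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Step 4: the neighborhood radius (and learning rate) shrink over time
            frac = t / n_iters
            lr = init_lr * np.exp(-frac)
            radius = init_radius * np.exp(-frac)
            # Step 5: the BMU and its neighbors move toward the sample,
            # with closer units learning more than distant ones
            rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
            grid_dist_sq = (rr - bmu[0]) ** 2 + (cc - bmu[1]) ** 2
            influence = np.exp(-grid_dist_sq / (2 * radius ** 2))
            weights += lr * influence[..., None] * (x - weights)
            # Step 6: repeat for n_iters iterations
        return weights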

Result Interpretation
An example of the result of a Self-Organizing Map is shown below.

  If the average distance to the surrounding weights is high, then those weights are very different from one another and a dark color is assigned to the location of the weight. If the average distance is low, a lighter color is assigned. On the resulting map, lines of black separating white regions indicate that those regions are not similar to each other: the black in between marks the dissimilarity. The white parts can therefore be read as different clusters and the black lines as the divisions between the clusters.
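
The coloring described above corresponds to a unified distance matrix (U-matrix): each cell holds the average distance from a unit's weight vector to those of its immediate grid neighbors. A minimal sketch of computing it from a trained weight grid, assuming NumPy and the same (rows, cols, dim) layout as before:

    import numpy as np

    def u_matrix(weights):
        rows, cols, _ = weights.shape
        umat = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                # Average Euclidean distance to the immediate grid neighbors
                dists = []
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        dists.append(np.linalg.norm(weights[r, c] - weights[nr, nc]))
                umat[r, c] = np.mean(dists)
        return umat

Plotting umat with a reversed grayscale colormap reproduces the picture described above: dark boundaries between light cluster regions.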


More Information
Reference Information
1. Self Organizing Maps by Tom Germano
2. Kohonen's Self Organizing Feature Maps
3. Description of Kohonen's Self-Organizing Map by Timo Honkela

For more information on SOM, refer to the list below:
1. Teuvo Kohonen - the inventor of the SOM.
2. WEBSOM - a new SOM architecture by Kohonen's laboratory.
3. Kohonen, T., Self-Organization and Associative Memory, New York: Springer-Verlag, 1988.
4. Kohonen, T., Self-Organizing Maps, 3rd Extended Ed., New York: Springer-Verlag, 2001.


   Jae-Wook Ahn and Sue Yeon Syn IS 2470 : Interactive System Design April 27, 2005