Lightweight and simple jQuery plugin for displaying pseudocode (similar to several algorithm/pseudocode packages for LaTeX, such as algo.sty).

Note that the below examples use MathJax for displaying LaTeX equations.

1. ### Link the Required Files

First, jQuery needs to be included. In addition, MathJax may be included and initialized. Finally, include jQuery Pseudocode (both JS and CSS):

<!-- Include jQuery: -->
<script type="text/javascript" src="js/jquery.min.js"></script>

<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>

<!-- Include jQuery Pseudocode: -->
<script type="text/javascript" src="js/jquery-pseudocode.js"></script>


jQuery can also be included using a CDN, for example the Google CDN:

<script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.3/jquery.min.js"></script>
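Putting the includes together, a minimal page could look as follows. This is only a sketch: the file paths (including the stylesheet path `css/jquery-pseudocode.css`) and the element id `my-algorithm` are assumptions based on the snippets above, so adjust them to your own setup:

```html
<!DOCTYPE html>
<html>
<head>
    <!-- jQuery first, then jQuery Pseudocode (JS and CSS); paths are assumed: -->
    <script type="text/javascript" src="js/jquery.min.js"></script>
    <script type="text/javascript" src="js/jquery-pseudocode.js"></script>
    <link rel="stylesheet" type="text/css" href="css/jquery-pseudocode.css">
</head>
<body>
    <!-- The pseudocode to be rendered: -->
    <pre id="my-algorithm">
var N = 6
for i = 1 to N
    N = N + 1
    </pre>
    <script type="text/javascript">
        $(document).ready(function() {
            $('#my-algorithm').pseudocode();
        });
    </script>
</body>
</html>
```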

2. ### Write Pseudocode

Write your pseudocode. The syntax is similar to Python, that is, the structure is indentation-based. Note that keywords and comment styles can be adapted:

<pre>
var N = 6
var j = 4
for i = 1 to N
    // A simple if statement:
    if i = j
        N = N + 1
</pre>


Note: by default, indentation is set to four spaces.

3. ### Call the Plugin

Finally, simply call the plugin on the selected element. You may also specify further options; see the Options tab for details.


<script type="text/javascript">
    $(document).ready(function() {
        $('#example-getting-started').pseudocode();
    });
</script>
<pre id="example-getting-started">
var N = 6
var j = 4
for i = 1 to N
    // A simple if statement:
    if i = j
        N = N + 1
</pre>

The following options are available:

keywords: Allows adding keywords and adjusting their color. Note that the below example does not use MathJax:

<pre>
var N = 6
var j = 4
for i = 1 to N
    // A simple if statement:
    if i = j
        N = N + 1
</pre>

comment: Allows setting the characters used for comments as well as their color. Note that currently only full-line comments are supported; inline comments, as for example /* ... */ in C++, are not supported:

<pre>
var N = 6
var j = 4
for i = 1 to N
    % Different comment style before if statement:
    if i = j
        N = N + 1
</pre>

tab: Controls the number of spaces needed for indentation, that is, one tab consists of tab spaces:

<pre>
var N = 6
var j = 4
for i = 1 to N
  // A simple if statement:
  if i = j
    N = N + 1
</pre>
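Combining the options, a full options object might look like the sketch below. The option names (keywords, comment, tab) are documented above; the exact value shape of the comment option (a map from comment prefix to color) is an assumption based on the examples, so check the plugin source if it does not match:

```javascript
// Sketch of a combined options object for jQuery Pseudocode.
// The shape of "comment" (prefix -> color) is an assumption.
var options = {
    keywords: {
        'if': '#000066',
        'for': '#000066',
        'function': '#000066',
        'return': '#000066'
    },
    comment: {
        '%': '#009900'   // treat "%" as the full-line comment prefix
    },
    tab: 2               // one indentation level = 2 spaces
};

// Then apply it to a <pre> element (hypothetical id):
// $('#my-algorithm').pseudocode(options);
```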


The below example is taken from davidstutz.de and represents the SEEDS superpixel algorithm (van den Bergh et al., 2012):


<script type="text/javascript">
    $(document).ready(function() {
        $('#example-seeds').pseudocode({
            keywords: {
                'if': '#000066',
                'for': '#000066',
                'each': '#000066',
                'return': '#000066',
                'function': '#000066'
            }
        });
    });
</script>
<pre id="example-seeds">
function SEEDS(
    $I$, // Color image.
    $w^{(1)} \times h^{(1)}$, // Initial block size.
    $L$, // Number of levels.
    $Q$ // Histogram size.
)
    initialize the block hierarchy and the initial superpixel segmentation
    // Initialize histograms for all blocks and superpixels:
    for $l = 1$ to $L$
        for each block $B_i^{(l)}$ // At level $l = L$ these are the initial superpixels.
            initialize histogram $h_{B_i^{(l)}}$
    for $l = L - 1$ to $1$
        for each block $B_i^{(l)}$
            let $S_j$ be the superpixel $B_i^{(l)}$ belongs to
            if a neighboring block belongs to a different superpixel $S_k$
                if $\cap(h_{B_i^{(l)}}, h_{S_k}) > \cap(h_{B_i^{(l)}}, h_{S_j})$
                    $S_k := S_k \cup B_i^{(l)}$, $S_j := S_j - B_i^{(l)}$
    for $n = 1$ to $W\cdot H$
        let $S_j$ be the superpixel $x_n$ belongs to
        if a neighboring pixel belongs to a different superpixel $S_k$
            if $h_{S_k}(h(x_n)) > h_{S_j}(h(x_n))$
                $S_k := S_k \cup \{x_n\}$, $S_j := S_j - \{x_n\}$
    return $S$
</pre>


The below example is also taken from davidstutz.de. The Turbopixels superpixel algorithm was introduced by Levinshtein et al. in 2009:


<script type="text/javascript">
    $(document).ready(function() {
        $('#example-turbopixels').pseudocode({
            keywords: {
                'if': '#000066',
                'for': '#000066',
                'repeat': '#000066',
                'until': '#000066',
                'return': '#000066',
                'function': '#000066'
            }
        });
    });
</script>
<pre id="example-turbopixels">
function turbopixels(
    $I$, // Color image.
    $K$ // Number of superpixels.
)
    place initial superpixel centers on a regular grid
    initialize $\psi^{(0)}$
    repeat
        compute $v_I$ and $v_B$
        evolve the contour by computing $\psi^{(T+1)}$
        update assigned pixels
        $T := T + 1$
    until all pixels are assigned
    derive superpixel segmentation $S$ from $\psi$
    return $S$
</pre>


The below example represents the Quick Shift algorithm introduced by Vedaldi and Soatto in 2008 (also taken from davidstutz.de):


<script type="text/javascript">
    $(document).ready(function() {
        $('#example-quickshift').pseudocode({
            keywords: {
                'if': '#000066',
                'for': '#000066',
                'function': '#000066',
                'return': '#000066'
            }
        });
    });
</script>
<pre id="example-quickshift">
function quickshift(
    $I$ // Color image.
)
    for $n = 1$ to $W\cdot H$
        initialize $t(x_n) = 0$
    for $n = 1$ to $W\cdot H$
        // $N_R(x_n)$ is the set of all pixels in the neighborhood of size $N$ around $x_n$.
        calculate $p(x_n) = \sum_{x_m \in N_R(x_n)} \exp\left(\frac{-d(x_n, x_m)}{(2/3)R}\right)$
    for $n = 1$ to $W\cdot H$
        set $t(x_n) = \arg \max_{x_m \in N_R(x_n):p(x_m) > p(x_n)}\{p(x_m)\}$
    // $t$ can be interpreted as a forest, where all pixels $x_n$ with $t(x_n) = 0$ are roots.
    derive superpixel segmentation $S$ from $t$
    return $S$
</pre>


The following example, also taken from davidstutz.de, presents the ERS (Entropy Rate Superpixels) algorithm (Liu et al., 2011):


<script type="text/javascript">
    $(document).ready(function() {
        $('#example-ers').pseudocode({
            keywords: {
                'if': '#000066',
                'for': '#000066',
                'each': '#000066',
                'function': '#000066',
                'return': '#000066'
            }
        });
    });
</script>
<pre id="example-ers">
function ers(
    $G = (V,E)$ // Undirected weighted graph.
)
    initialize $M = \emptyset$
    for each edge $(n,m) \in E$
        // Let $\hat{G}$ denote the graph $(V, M \cup \{(n,m)\})$:
        let $(n,m)$ be the edge yielding the largest gain in the energy $E(\hat{G})$
        if $\hat{G}$ contains $K$ connected components or less:
            $M := M \cup \{(n,m)\}$.
    derive superpixel segmentation $S$ from $\hat{G}$
    return $S$
</pre>


The graph-based segmentation algorithm proposed by Felzenszwalb and Huttenlocher (2004), taken from davidstutz.de:


<script type="text/javascript">
    $(document).ready(function() {
        $('#example-fh').pseudocode({
            keywords: {
                'if': '#000066',
                'for': '#000066',
                'each': '#000066',
                'function': '#000066',
                'return': '#000066'
            }
        });
    });
</script>
<pre id="example-fh">
function fh(
    $G = (V,E)$ // Undirected, weighted graph.
)
    sort $E$ by increasing edge weight
    let $S$ be the initial superpixel segmentation
    for $k = 1,\ldots,|E|$
        let $(n,m)$ be the $k^{\text{th}}$ edge
        if the edge connects different superpixels $S_i,S_j \in S$
            if $w_{n,m}$ is sufficiently small compared to $MInt(S_i,S_j)$
                merge superpixels $S_i$ and $S_j$
    return $S$
</pre>


Online Bagging, proposed by Oza in 2005, taken from davidstutz.de:


<script type="text/javascript">
    $(document).ready(function() {
        $('#example-online-bagging').pseudocode({
            keywords: {
                'if': '#000066',
                'for': '#000066',
                'function': '#000066',
                'return': '#000066'
            }
        });
    });
</script>
<pre id="example-online-bagging">
function online_bagging(
    $(x_n,t_n)$ // Sample.
)
    for $m = 1, \ldots, M$
        $k \sim Poisson(1)$
        for $i = 1,\ldots,k$
            train $h_m$ on $x_n$
</pre>


Online Boosting, proposed by Oza in 2005, taken from davidstutz.de:


<script type="text/javascript">
    $(document).ready(function() {
        $('#example-online-boosting').pseudocode({
            keywords: {
                'if': '#000066',
                'for': '#000066',
                'function': '#000066',
                'return': '#000066'
            }
        });
    });
</script>
<pre id="example-online-boosting">
function online_boosting(
    $(x_n,t_n)$ // Sample.
)
    initialize $\lambda = 1$
    for $m = 1,\ldots, M$
        $k \sim Poisson(\lambda)$
        for $i = 1,\ldots,k$
            train $h_m$ on $x_n$
        if $h_m(x_n) = t_n$
            $\lambda_m^{corr} += \lambda$
            $\epsilon_m = \lambda_m^{wrong}/(\lambda_m^{corr} + \lambda_m^{wrong})$
            $\lambda = \lambda/(2(1 - \epsilon_m))$
        else
            $\lambda_m^{wrong} += \lambda$
            $\epsilon_m = \lambda_m^{wrong}/(\lambda_m^{corr} + \lambda_m^{wrong})$
            $\lambda = \lambda/(2\epsilon_m)$
</pre>