Experimental functions
conv
1D or 2D convolution of an input by a filter; see Convolution arithmetic for a nice explanation.
(conv in filter {pad {stride {dilation}}})
parameter
- in: input vector or matrix (or timeserie of vector or matrix)
- filter: kernel tensor.
- pad: (optional, default is 0) padding, 1 integer or an array of 2 or 4 integers
- stride: (optional, default is 1) stride, 1 integer or [int int]
- dilation: (optional, default is 1) dilation factor, 1 integer or [int int]
These three options are sketched in the last example of this section.
examples
Sobel Horizontal
The Sobel operator applied to the 1st component (red) of an image.
Shows the original tensor, the filter kernel and the result.
(def
;get 1st (red) component of image
gray (get (import "file://~/http_server/images/tix_color.png") 0)
sobel-x (tensor (shape 3 3) [
-1 0 1
-2 0 2
-1 0 1])
sobel-y (transpose sobel-x)
)
;[
; gray
; sobel-x
(conv gray sobel-x)
;]
Sobel gradient magnitude
Applies the horizontal and vertical Sobel operators to the 1st component (red) of an image.
The results are combined to find the absolute magnitude of the gradient at each point.
(def
;get 1st (red) component of image
gray (get (import "file://~/http_server/images/tix_color.png") 0)
sobel-x (tensor (shape 3 3) [-1 0 1 -2 0 2 -1 0 1])
sobel-y (transpose sobel-x)
)
;[
; gray
(sqrt
(+
(sqr (conv gray sobel-x))
(sqr (conv gray sobel-y))))
;]
Sobel RGB combined magnitudes
Computes the Sobel magnitude on each RGB channel.
Then combines them into a 3D tensor (RGB x X x Y) to obtain a color image.
(def
tix (import "file://~/http_server/images/tix_color.png")
sobel-x (tensor (shape 3 3) [-1 0 1 -2 0 2 -1 0 1])
sobel-y (transpose sobel-x)
)
(defn one-channel[img filter c]
;apply filter on given channel
(conv (get img c) filter)
)
(defn sobel-gradient[img c]
;compute sobel gradient on given channel
(sqrt
(+
(sqr (one-channel img sobel-x c))
(sqr (one-channel img sobel-y c))))
)
;[
; tix
;combine gradient per channel
;this creates an RGB image
(tensor
(sobel-gradient tix 0)
(sobel-gradient tix 1)
(sobel-gradient tix 2))
;]
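The optional pad, stride and dilation arguments were left at their defaults in the examples above. A minimal sketch of their use, reusing gray and sobel-x from the first examples; the exact output sizes follow the usual convolution arithmetic and are an assumption here:
(conv gray sobel-x 1 2)   ;pad 1, stride 2: output roughly half the input size
(conv gray sobel-x 1 1 2) ;pad 1, stride 1, dilation 2: kernel samples every other pixel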
covariance
Covariance matrix (a 2D tensor) between timeseries. Input timeseries are synchronized; for missing points they are considered piecewise constant.
(covariance ts...)
parameter
- ts: as many timeseries or arrays of timeseries as you want
example
Shows the covariance matrix between temperatures coming from 10 meteo stations. Stations are kept only if their longitude is not West (>= 0) and their temperature timeserie has at least 1000 points. Slice is used to compute the covariance for the first 10 stations only.
WARNING: can be slow to compute (>10s), about 30 million points are explored.
(def start 2016-01-01 stop 2018-12-31)
(defn is-usable[code]
(cond
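;reject stations with a West (negative) longitude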
(< (vget (timeserie @"lon" "meteonet" code start stop)) 0) false
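;reject temperature timeseries shorter than 1000 points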
(< (len (timeserie @"t" "meteonet" code start stop)) 1000) false
true))
(def stations
(value-as-array (slice (keep (perimeter "meteonet" start) is-usable) [0 9])))
(defn ts[code]
(timeserie @"t" "meteonet" code start stop))
(covariance (map ts stations))
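A smaller sketch for a first try, assuming element-wise timeserie arithmetic such as (* 2 ts1): the second serie is twice the first, so its variance should be four times the first one.
(def start 2019-01-01)
(def
  ;100 hourly points of uniform random values, seeded by timestamp
  ts1 (timeserie (range start (+ (* 99 1h) start) 1h) (fn[t] (rand-u 0 1 t)))
)
(covariance ts1 (* 2 ts1))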
dropout
Dropout randomly replaces input values by zero with the given probability.
Only active during learning; it does nothing when called directly.
(dropout in p)
parameter
- in: input vector or matrix (or timeserie of vector or matrix)
- p: probability to drop an input value, should be between 0 and 1
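Since dropout only acts during learning, a natural place for it is inside a transfer function given to solve. A hedged sketch, reusing w0, b0, w1 and the transfert function from the solve example further below, dropping hidden activations with probability 0.5:
(defn transfert[x]
  ;randomly zero half of the hidden layer while learning
  (mat* w1 (dropout (sigmoid (+ b0 (mat* w0 x))) 0.5)))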
import
Import a resource from a URI (URL). Still in development, only PNG, JPEG and GIF images are imported, as RGBA tensors.
(import uri {checksum})
parameter
- uri: resource uri
- checksum: (optional) ensures the resource's SHA256 checksum is the expected one
examples
Import the small Strigi-Form logo as an RGBA tensor and show it as an image.
The URL points to the local LispTick home images folder.
(import "file://~/http_server/images/logo_symbole_1x.png")
Same source image from the official LispTick site, but keeping only channel index 0, the red one.
(get (import "https://lisptick.org/images/logo_symbole_1x.png") 0)
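The optional checksum pins the resource content; import should fail if the downloaded bytes do not match. A sketch with a hypothetical placeholder value, not the real checksum of this file:
(import
  "https://lisptick.org/images/logo_symbole_1x.png"
  ;hypothetical SHA256, replace with the expected hex digest
  "0000000000000000000000000000000000000000000000000000000000000000")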
maxpool
1D or 2D MaxPool replaces each input value by the maximum value in its neighborhood.
Used to reduce input dimensionality.
(maxpool in kernel {pad {stride}})
parameter
- in: input vector or matrix (or timeserie of vector or matrix)
- kernel: kernel size, 1 integer or [int int]
- pad: (optional, default is 0) padding, 1 integer or an array of 2 or 4 integers
- stride: (optional, default is 1) stride, 1 integer or [int int]
examples
Maximum
Replaces each value by the maximum in a 2x2 neighborhood.
(def
;get 1st (red) component of image
gray (get (import "file://~/http_server/images/tix_color.png") 0)
)
;[
; gray
(maxpool gray 2)
;]
Reduce size
Reduces input dimensionality by 2 using the maximum in a 2x2 neighborhood.
Zero padding and a stride of 2, so each dimension is divided by 2.
(def
;get 1st (red) component of image
gray (get (import "file://~/http_server/images/tix_color.png") 0)
)
;[
; gray
(maxpool gray 2 0 2)
;]
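Kernel, pad and stride can also be given per dimension with the [int int] form. A sketch with a rectangular 2x4 neighborhood and a matching stride, reusing gray from above:
(maxpool gray [2 4] 0 [2 4])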
shape
Shape of a tensor: an array of integers representing each dimension's size.
See tensor for examples.
(shape arg1 {arg2 {arg3...}})
parameter
- arg1: size of first dimension, or tensor to get shape from
- arg2: size of second dimension
- argn: size of nth dimension
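A quick sketch of both uses, building a shape from sizes and reading it back from a tensor (the expected result of the last line is [2 3]):
;shape built from dimension sizes
(shape 2 3)
;shape read back from an existing tensor
(shape (tensor (shape 2 3) [1 2 4 3 6 0]))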
solve
Cost function optimizer using stochastic gradient descent.
Internally LispTick uses the Gorgonia package. All optimizer models, their options and default values are mapped to Gorgonia models and default values.
(solve [learn1 ...] [ts1...] cost epochs [model {(option . value)...}])
parameter
- [learn1 …]: array of symbols for the learnables
- [ts1 …]: array of timeseries used as arguments for the cost function
- cost: cost function, its inputs are [ts1 …] and its output is a scalar that will be minimized by the solver
- epochs: number of epochs to run
- [model {(option . value)…}]: model name and its optional arguments, see the sketch after the list below
Available models, their optional arguments and default values:
- "adagrad": AdaGradSolver is the solver that does adaptive gradient descent (see paper).
  - ("rate" . 0.001) learn rate
  - ("eps" . 1e-8) smoothing factor
  - ("l1reg" . none) L1 regularization parameter
  - ("l2reg" . none) L2 regularization parameter
  - ("clip" . none) clip gradient at this value
- "adam": Adaptive Moment Estimation solver, basically RMSProp on steroids (see paper).
  - ("rate" . 0.001) learn rate
  - ("eps" . 1e-8) smoothing factor
  - ("beta1" . 0.9) modifier for means
  - ("beta2" . 0.999) modifier for variances
  - ("l1reg" . none) L1 regularization parameter
  - ("l2reg" . none) L2 regularization parameter
  - ("clip" . none) clip gradient at this value
- "barzilaiborwein": Barzilai-Borwein performs gradient descent in the steepest descent direction.
  - ("rate" . 0.001) learn rate
  - ("clip" . none) clip gradient at this value
- "momentum": Momentum is the stochastic gradient descent optimizer with a momentum term.
  - ("rate" . 0.001) learn rate
  - ("momentum" . 0.9) momentum
  - ("l1reg" . none) L1 regularization parameter
  - ("l2reg" . none) L2 regularization parameter
  - ("clip" . none) clip gradient at this value
- "rmsprop": RMSPropSolver implements Geoffrey Hinton's RMSProp gradient descent optimization algorithm (see paper).
  - ("rate" . 0.001) learn rate
  - ("eps" . 1e-8) smoothing factor
  - ("rho" . 0.999) decay rate/rho
  - ("l2reg" . none) L2 regularization parameter
  - ("clip" . none) clip gradient at this value
- "vanilla": VanillaSolver is the standard stochastic gradient descent optimizer.
  - ("rate" . 0.001) learn rate
  - ("l1reg" . none) L1 regularization parameter
  - ("l2reg" . none) L2 regularization parameter
  - ("clip" . none) clip gradient at this value
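As a sketch of the option syntax from the signature, Adam with a custom learn rate and gradient clipping could be passed as the last solve argument like this (the dotted-pair form is taken from the list above):
["adam" ("rate" . 0.01) ("clip" . 5)]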
example
This example shows how to describe, train and use a NN with one hidden layer to learn a simple function like cosine. You can play with it and change the hidden layer size, target function…
The solver used is Adam, Adaptive Moment Estimation (see paper).
(def
pi 3.14159265359 ;π
hidden 8
size 10000
;randomly initialized weights
w0 (tensor (shape hidden 1) (fn[x] (rand-g 0 1)))
b0 (tensor (shape hidden 1) (fn[x] (rand-g 0 1)))
w1 (tensor (shape 1 hidden) (fn[x] (rand-g 0 1)))
start 2019-01-01
)
(def
;timeserie of size uniform random values between -π and π
ts_angle
(timeserie
(range start (+ (* (- size 1) 1h) start) 1h)
(fn[t] (rand-u (* -1 pi) pi t)))
;target timeserie, simply the cosine of the input
ts_target (cos ts_angle)
)
;Neural Network transfer function with one hidden layer
(defn transfert[x]
(mat* w1 (sigmoid (+ b0 (mat* w0 x)))))
;cost function, squared error
(defn cost[x ref]
(+ (sqr (- (transfert x) ref))))
;trick to trigger the solver by looking at its last value
(vget (last
(solve
["w0" "b0" "w1"]
[ts_angle ts_target]
cost
2 ;few epochs
["adam"])))
;use the learned NN to compute cosine!
(transfert 0)
svd-s
Singular Value Decomposition, singular values.
Nice article on how to use it for Machine Learning.
(svd-s matrix)
parameter
- matrix: a 2D tensor
svd-u
Singular Value Decomposition, U orthonormal base.
Nice article on how to use it for Machine Learning.
(svd-u matrix)
parameter
- matrix: a 2D tensor
svd-v
Singular Value Decomposition, V orthonormal base.
Nice article on how to use it for Machine Learning.
(svd-v matrix)
parameter
- matrix: a 2D tensor
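A minimal sketch applying the three functions to the 2x3 matrix reused in the tensor examples below; svd-s should give the singular values, svd-u and svd-v the orthonormal bases:
(def m (tensor (shape 2 3) [1 2 4 3 6 0]))
(svd-s m)
(svd-u m)
(svd-v m)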
tensor
Creates a tensor, generally from 1D to 4D.
(tensor shape {values|fn})
Or combine several tensors to create a higher dimension tensor.
Each tensor must have the same shape; the resulting shape is n x the common tensor shape.
(tensor t1 .. tn)
parameter
- shape: shape of tensor
- values: array of values, sequential (1D)
- fn: a function called sequentially with a single index argument in [0 size[
- t1: a tensor
- tn: a tensor with same shape as previous
examples
Hard coded 2D matrices:
(tensor (shape 2 3) [1 2 4 3 6 0])
(tensor (shape 3 2) [1 2 4 3 6 0])
Index as value with an anonymous identity function:
(tensor (shape 3 2) (fn[i] i))
Randomly generated values with rand-g, index unused:
(tensor (shape 8 16) (fn[i] (rand-g 0 1)))
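The combining form stacks same-shape tensors into a higher-dimension one, as in the Sobel RGB example above. A sketch stacking two 3x2 tensors, which should produce a 2x3x2 result:
(def
  a (tensor (shape 3 2) (fn[i] i))
  ;same shape, doubled values
  b (tensor (shape 3 2) (fn[i] (* 2 i)))
)
(tensor a b)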
transpose
Tensor transposition.
(transpose tensor)
parameter
- tensor: a tensor
example
(def t
(tensor (shape 2 3) [1 2 4 3 6 0]))
(transpose t)