This is an isosurface produced by superposing the field generated by a cylinder with one generated by random points in space.

Dodo is a plugin for Grasshopper in which we are collecting scientific tools useful to computational designers. You can find the plugin, complete with some examples, here, and the full manual here. If you have any questions, ideas to implement, or insults, do not hesitate to contact us!

Food4Rhino

Isosurfaces

An isosurface is the collection of points in space that map to the same value. In a 2D domain the equivalent is the isoline; an example can be found in topographic maps, where mountains and depressions are represented as closed curves, each indicating a different height.
Isosurfaces are the 3D equivalent of isocurves: they arise from a 3D domain where each point in space has a certain value, and they connect all the points sharing a given value, thus forming one or more surfaces.
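
Dodo does this sampling and meshing inside Grasshopper; purely as an illustration of the concept (not Dodo's code), the Python sketch below samples a made-up superposed field of a cylinder and a point on a grid and extracts an isosurface with scikit-image's marching cubes. The field formula and the iso-value are assumptions for the example.

```python
import numpy as np
from skimage import measure

# Sample a scalar field on a regular grid: the field of a cylinder
# around the z-axis superposed with the field of one point in space.
n = 64
xs = np.linspace(-2.0, 2.0, n)
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")

cylinder = np.sqrt(x**2 + y**2)                        # distance from z-axis
p = np.array([0.7, -0.4, 0.9])                         # a "random" point
point = np.sqrt((x - p[0])**2 + (y - p[1])**2 + (z - p[2])**2)

field = 1.0 / (cylinder + 0.1) + 1.0 / (point + 0.1)   # superposed fields

# Extract the isosurface field == 1.5 as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(field, level=1.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```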

One of the main ways to produce these isosurfaces is the Marching Cubes algorithm, or one of the theories derived from it. In this plugin, Marching Tetrahedra has been used since it handles singular points better.
Among the newer implementations there are some that make use of the derivatives of the given scalar field, but unfortunately the Grasshopper Field component does not calculate derivatives.
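
The core step all the marching algorithms share is finding, by linear interpolation, where the isosurface crosses the edges of each cell. Here is a minimal sketch of that step for a single tetrahedron; it is a simplification for illustration, not Dodo's implementation:

```python
import numpy as np

def edge_crossing(p0, p1, v0, v1, iso):
    """Linearly interpolate the point on edge p0-p1 where the field
    values v0, v1 cross the iso-value (assumes a sign change)."""
    t = (iso - v0) / (v1 - v0)
    return p0 + t * (p1 - p0)

def polygonise_tet(pts, vals, iso):
    """Return the crossing points of the isosurface with one tetrahedron.
    The 16 corner sign patterns reduce to three cases: no crossing,
    one triangle (1 corner separated), or a quad (2 vs. 2 corners,
    which still needs consistent ordering before triangulation)."""
    inside = [v < iso for v in vals]
    cut = [(a, b) for a in range(4) for b in range(a + 1, 4)
           if inside[a] != inside[b]]
    return [edge_crossing(pts[a], pts[b], vals[a], vals[b], iso)
            for a, b in cut]

tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
print(polygonise_tet(tet, [0.0, 1.0, 1.0, 1.0], iso=0.5))  # one triangle
```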

Resources: Marching Tetrahedra by Paul Bourke.

Non-Linear Optimization

Non-linear optimization differs from linear optimization in that the function it tries to minimize and/or the constraints it is subject to are non-linear, which is the case for most engineering applications. NL-opt makes use of gradient-free algorithms: since no analytic gradient is available, it computes an approximate value of the gradient at a given point. Once this is done, it tries to move to a neighbouring point according to the interpretation of the gradient given by the specific engine's criteria.
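
As a minimal sketch of this idea, assuming a central finite-difference scheme and a fixed step size (the actual NLopt engines are considerably more sophisticated):

```python
import numpy as np

def approx_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def minimize(f, x, rate=0.2, iters=100):
    """Repeatedly move to a neighbouring point against the gradient."""
    for _ in range(iters):
        x = x - rate * approx_gradient(f, x)
    return x

bowl = lambda x: (x[0] - 3.0)**2 + (x[1] + 1.0)**2
print(minimize(bowl, np.array([0.0, 0.0])))   # converges near (3, -1)
```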

This plugin uses a .NET port of the famous NLopt library: the original library can be found here, whilst the .NET implementation is here.

Optimization convergence rate

Artificial Neural Network

Neural Network Scheme

Artificial Neural Networks date back to the second half of the 20th century and take their name from their similarity to the neuron interconnections in the brain. ANNs are composed of a series of layers, each containing a number of neurons, each of which connects to its peers in the layers before and after, as shown in the picture. The example in the picture shows an ANN mapping data from a 3D domain to a 2D one by making use of a hidden layer. Hidden layers are those which are connected directly neither to the input data nor to the output. An ANN can have any number of neurons and hidden layers, which can lead to completely different results as well as increased computational time. Neurons' connections are drawn as arrows pointing in the direction in which the data flows. Usually neurons perform very basic operations, and they are connected through weighted links whose values are what the ANN optimizes in order to fit the input data to the expected results.
The fields of application of these ANNs are data fitting, prediction and classification, and their implementation is treated below.
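
To make the structure concrete, here is a minimal forward pass through such a network in Python, assuming 4 hidden neurons and sigmoid activations purely for illustration:

```python
import numpy as np

def forward(x, weights, biases):
    """One forward pass through a fully connected network.
    Each layer computes sigmoid(W @ x + b) and feeds the next one."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    for W, b in zip(weights, biases):
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(0)
# 3 inputs -> 4 hidden neurons -> 2 outputs, as in the scheme above.
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
print(forward(np.array([0.2, -0.7, 1.0]), weights, biases))
```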

Data fitting, prediction and classification are not the only applications, though: ANNs can also be used with unsupervised learning, for example as a Self-Organizing Map (SOM).

The library on which Dodo relies for Neural Networks can be found here.

ANN - Supervised Training

This type of learning algorithm uses sample inputs matched with desired output values during the learning phase. The goal of this method is to shape the ANN so that it provides a close-fitting output when given an input.

Delta Rule

Delta Rule in 3D space

Delta Rule learning is one of the two threshold finders in Dodo's ANN, along with the Perceptron Rule.
In the example below, the 8 vertices of a cube are assigned to two groups: group 0 above and group 1 below. The trained ANN finds the decision boundary, depicted as a grey plane, which splits the decision space in two. In the picture on the right, an 11x11x11 grid is sampled and only the points with a predicted class of 1 are shown, demonstrating that they respect the boundary surface.
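
The delta rule itself is a simple iterative weight update, w ← w + η(t − y)x. Here is a minimal sketch reproducing the cube example with a single linear unit; it is a simplification for illustration, not Dodo's component:

```python
import numpy as np

# The 8 vertices of the unit cube, labelled by height:
# group 0 above (z = 1), group 1 below (z = 0), as in the figure.
X = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
t = 1.0 - X[:, 2]                         # target class of each vertex

w, b, eta = np.zeros(3), 0.0, 0.1         # weights, bias, learning rate
for _ in range(500):
    for xi, ti in zip(X, t):
        y = w @ xi + b                    # linear unit output
        w += eta * (ti - y) * xi          # delta rule update
        b += eta * (ti - y)

# w.x + b = 0.5 is the separating plane (the grey plane in the figure).
print((X @ w + b >= 0.5).astype(int), t.astype(int))
```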

Perceptron

Delta Rule Perceptron

In this example the NN is fed a number of points in 2D space belonging to three distinct groups, which leads to a 3D solution space. To represent the group each point belongs to, a 3D vector can be used whose components are either 0 or 1. Moreover, for a better graphical understanding, the vectors are multiplied by 255 so as to turn them into colors.
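
A sketch of this encoding, with made-up class indices:

```python
import numpy as np

groups = np.array([0, 2, 1, 0, 1])        # class index of each 2D point

one_hot = np.eye(3)[groups]               # 3D vector per point: 0s and 1s
colors = (one_hot * 255).astype(int)      # scale the vector to RGB channels

for g, c in zip(groups, colors):
    print(f"group {g} -> RGB {tuple(c)}")
```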

Approximation

Supervised training can be used to make an ANN approximate the values of a series. In the example shown below, 8 pairs of numbers were used and plotted in teal. The red curve is the approximation generated by an ANN trained with back-propagation. The two curves superpose neatly on the first part and diverge a bit towards the ends, but this behaviour can change widely when playing with the learning coefficients.
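
Here is a minimal back-propagation sketch in the same spirit, with a single hidden layer and made-up sample data; Dodo relies on the library linked above, not on this code:

```python
import numpy as np

rng = np.random.default_rng(1)

# 8 made-up sample pairs (x, y), standing in for the teal series.
x = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
y = 0.5 + 0.4 * np.sin(2 * np.pi * x)

# One hidden layer of 6 tanh neurons, one linear output neuron.
W1 = rng.normal(scale=0.5, size=(1, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)
eta = 0.1                                  # learning coefficient

for _ in range(30000):
    h = np.tanh(x @ W1 + b1)               # forward pass
    out = h @ W2 + b2
    g = (out - y) / len(x)                 # gradient of the mean squared error
    gh = (g @ W2.T) * (1.0 - h**2)         # back-propagate through tanh
    W2 -= eta * (h.T @ g);  b2 -= eta * g.sum(0)
    W1 -= eta * (x.T @ gh); b1 -= eta * gh.sum(0)

print(np.hstack([y, np.tanh(x @ W1 + b1) @ W2 + b2]))  # target vs. ANN output
```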

Prediction

For prediction the procedure is the same as before, but the ANN is fed a single series of numbers and made to use sub-series of 5 values to predict the 6th; the difference between the prediction and the expected value rates the learning. In the example below, 16 values from a cosine have been used, and in the picture underneath one can see how well the ANN predicts the following 34 values without a progressive increase of the error.
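
The sliding-window preparation of the training pairs can be sketched like this (the network itself is then trained as in the previous example):

```python
import numpy as np

series = np.cos(np.linspace(0, 2 * np.pi, 16))   # the 16 known values

# Turn the series into supervised pairs: 5 values in, the 6th one out.
window = 5
X = np.array([series[i:i + window] for i in range(len(series) - window)])
t = series[window:]

print(X.shape, t.shape)   # (11, 5) inputs and 11 targets to train on

# Once trained, the net predicts recursively: feed the last 5 values,
# append the prediction, slide the window, and repeat 34 times.
```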

ANN - Unsupervised Training

The ANN is given sample inputs without expected results, and it organizes itself in order to find patterns in the series. Dodo implements two algorithms in this category: SOM and Elastic Network.

Elastic Network

Travelling salesman problem solved using a NN

Unsupervised training with an elastic network can be used to find a solution to the travelling salesman problem. The elastic network is initially placed at the centre of the data set, then it slowly tries to replicate the values (positions), basically working as a damped spring system. The resulting weights are the Euclidean distances between the data points, and the neurons' output can be used to measure how well the NN fits the data samples. The neurons' output can then be used to see how similar another dataset is to the first one, and it is my understanding that these pattern recognition strategies have been successfully used to recognize cancer cells in pictures.
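
As a rough illustration of the idea, here is a simplified self-organizing ring, a close cousin of the elastic network, solving a small TSP instance; all parameters and the decay schedule are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
cities = rng.random((10, 2))                   # the points to visit

# A ring of neurons, initially at the centre of the data set.
n = 80
ring = np.tile(cities.mean(axis=0), (n, 1)) + rng.normal(scale=0.01, size=(n, 2))

sigma, eta = 10.0, 0.8
for _ in range(3000):
    city = cities[rng.integers(len(cities))]
    winner = np.argmin(np.linalg.norm(ring - city, axis=1))
    # Pull the winner and its ring neighbours towards the city, with a
    # force that decays with ring distance (the damped "spring" pull).
    d = np.abs(np.arange(n) - winner)
    d = np.minimum(d, n - d)                   # distance along the ring
    force = np.exp(-(d**2) / (2 * sigma**2))
    ring += eta * force[:, None] * (city - ring)
    sigma *= 0.999; eta *= 0.999               # slowly stiffen the ring

# Read the tour off: visit cities in the order of their nearest neuron.
order = np.argsort([np.argmin(np.linalg.norm(ring - c, axis=1)) for c in cities])
print("tour:", order)
```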

 Much more in the documentation!