Introducing Bright Wire
A while ago I created an open-source machine learning library in C#.
This was before TensorFlow and PyTorch became the unstoppable forces that they are today.
It was interesting to learn how neural networks actually work, and it was (and remains) a challenging design problem: how to balance flexibility with performance in a medium-sized library.
One of the design goals was to be able to run machine learning purely in .NET, and I used Bright Wire as the machine learning foundation of a bespoke .NET natural language parsing pipeline.
What is Machine Learning?
Machine learning is a type of artificial intelligence (AI) that allows computers to learn without being explicitly programmed.
This is done by training the computer on a dataset of examples.
The computer then uses this data to learn how to make predictions or decisions.
There are many different types of machine learning algorithms, each with its own strengths and weaknesses.
Some common algorithms include linear regression, logistic regression, decision trees, and of course neural networks.
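To make the idea of "learning from examples" concrete, here is a minimal sketch of linear regression trained by gradient descent. It is written in Python purely for brevity (Bright Wire itself is C#), and the data and learning rate are illustrative:

```python
# Minimal linear regression trained by gradient descent.
# The "training" loop adjusts w and b to reduce prediction error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # the learned parameters approach 2 and 1
```

The model is never told the rule "multiply by 2 and add 1"; it recovers those parameters purely from the example data.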
Why is Machine Learning Important?
Machine learning is important because it allows computers to solve problems that would be difficult or impossible to solve with traditional programming methods. For example, machine learning can be used to:
Classify images or text
Predict future events
Make recommendations
Control robots
Analyze data
Bright Wire Features
Neural networks
Feed Forward, Convolutional and Bidirectional network architectures
LSTM, GRU, Simple, Elman and Jordan recurrent neural networks
L2, Dropout and DropConnect regularisation
Relu, LeakyRelu, Sigmoid, Tanh and SoftMax activation functions
Gaussian, Xavier and Identity weight initialisation
Cross Entropy, Quadratic and Binary cost functions
Momentum, NesterovMomentum, Adagrad, RMSprop and Adam gradient descent optimisations
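As a rough illustration of two items in the list above, here is the ReLU activation and a classic momentum update, sketched in Python. This is a concept sketch under illustrative hyperparameters, not Bright Wire's API:

```python
def relu(x):
    # ReLU passes positive values through and zeroes out negatives
    return max(0.0, x)

def momentum_step(w, grad, velocity, lr=0.1, mu=0.9):
    # Classic momentum: the velocity accumulates past gradients,
    # smoothing and accelerating the descent direction
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

# Minimise f(w) = w^2 from a starting point of 5.0
w, v = 5.0, 0.0
for _ in range(200):
    grad = 2 * w          # gradient of f(w) = w^2
    w, v = momentum_step(w, grad, v)

print(relu(-3.0), relu(2.0), w)
```

In a real network the same update runs over whole weight matrices rather than a single scalar, and optimisers like Adam additionally adapt the learning rate per parameter.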
Bayesian
Naive Bayes
Multinomial Bayes
Multivariate Bernoulli
Markov Models
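The idea behind naive Bayes classification can be shown in a few lines. This Python sketch (a toy spam filter with made-up documents, not Bright Wire's implementation) scores each class by its prior plus smoothed per-word log-likelihoods:

```python
from collections import Counter
from math import log

# Tiny illustrative corpus: label -> tokenised documents
docs = {
    "spam": [["win", "money", "now"], ["free", "money"]],
    "ham":  [["meeting", "at", "noon"], ["lunch", "at", "noon"]],
}

# Per-class word counts, vocabulary and class priors
counts = {c: Counter(w for d in ds for w in d) for c, ds in docs.items()}
vocab = {w for c in counts for w in counts[c]}
total_docs = sum(len(ds) for ds in docs.values())
priors = {c: len(ds) / total_docs for c, ds in docs.items()}

def classify(words):
    # Sum log-probabilities with add-one (Laplace) smoothing,
    # then pick the highest-scoring class
    scores = {}
    for c in docs:
        total = sum(counts[c].values())
        scores[c] = log(priors[c]) + sum(
            log((counts[c][w] + 1) / (total + len(vocab))) for w in words
        )
    return max(scores, key=scores.get)

print(classify(["free", "money"]))
```

The "naive" part is the assumption that words occur independently given the class, which is wrong in practice but works surprisingly well.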
Unsupervised
K Means Clustering
Hierarchical Clustering
Non-Negative Matrix Factorisation
Random Projection
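K-means, the first item above, alternates between assigning points to their nearest centroid and moving each centroid to its cluster's mean. A one-dimensional Python sketch (with fixed, deliberately poor starting centroids; again not Bright Wire's API):

```python
# One-dimensional k-means with k = 2
points = [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]
centroids = [0.0, 5.0]  # deliberately poor initial guesses

for _ in range(10):
    # Assignment step: each point joins its nearest centroid's cluster
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Update step: move each centroid to the mean of its cluster
    centroids = [sum(c) / len(c) if c else centroids[i]
                 for i, c in enumerate(clusters)]

print(centroids)  # settles on the two natural groups
```

No labels are involved, which is what makes this unsupervised: the structure emerges from the data alone.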
Tree Based
Decision Trees
Random Forest
K Nearest Neighbour classification
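K nearest neighbour classification needs no training step at all: a new point is labelled by a majority vote among its k closest training points. A minimal Python sketch with made-up 2-D data (illustrative only, not Bright Wire's API):

```python
from collections import Counter

# Labelled 2-D training points: ((x, y), class)
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"),
         ((4.0, 4.0), "b"), ((4.2, 3.9), "b")]

def knn(point, k=3):
    # Sort training points by squared Euclidean distance and
    # take a majority vote among the k nearest
    dist = lambda p: (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2
    nearest = sorted(train, key=lambda t: dist(t[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn((1.1, 1.1)))
```

The trade-off is that every prediction scans the whole training set, so real implementations typically use spatial indexes to speed up the neighbour search.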