
What is an Artificial Neural Network?

Written by Savaram Ravindra, Content Lead at Mindmajix


A neural network is an assembly of simple, interconnected processing units (nodes or elements) whose functionality is based on the biological neuron. The network's processing capability is stored in the strengths of its inter-unit connections (weights), which are obtained by learning (a process of adaptation) from a set of training patterns. The objective of neural network systems is to perform computational tasks much faster than conventional systems. Examples of such tasks include text-to-voice translation, zip code recognition, and function approximation. This article provides an in-depth explanation of artificial neural networks.

Artificial Neural Networks and their Importance

An Artificial Neural Network (ANN) is a model for processing information that is inspired by the structure and functions of biological neural networks. The key element of this model is the novel structure of its information processing system: a large number of interconnected processing elements (neurons) working simultaneously to solve specific problems. Like humans, ANNs learn by example.

Artificial neural networks are largely used for data modeling and statistical analysis, where they are perceived as an alternative to standard cluster analysis or nonlinear regression techniques. Hence, they are generally used in problems that can be formulated in terms of forecasting or classification. A few examples include textual character recognition, speech and image recognition, and domains of human expertise such as financial market prediction, medical diagnosis, and geological surveying for oil.

How does an Artificial Neural Network Work?

A neuron is the neural network's fundamental processing element, and it has a few general capabilities. A biological neuron acquires inputs from various other sources, integrates them, carries out a non-linear operation on the result, and then outputs the final result.

A Simple Neuron

There are many variations of this basic neuron type in humans, which further complicates attempts to replicate the thinking process electronically. Still, natural neurons have four basic components: dendrites, the soma (cell body), the axon, and synapses. Dendrites are the hair-like extensions of the cell body that act as input channels; they obtain their input via the synapses of other neurons. The cell body then handles these incoming signals over time and processes them, converting the processed value into an output. The output is sent to other neurons via the axon and then the synapses.

From present-day experimental data, it is evident that biological neurons are much more complex than the simple explanation given above, and far more complex than today's artificial neurons. As technology advances and biology offers a better understanding of neurons, network designers can enhance their systems by building on this understanding of the biological brain. The goal of today's ANNs, however, is not an extravagant recreation of the brain. Rather, neural network researchers seek an understanding of nature's capabilities so that people can develop solutions to problems that conventional computing has not solved. To accomplish this, the artificial neurons that form the basic units of ANNs simulate the four basic functions of biological neurons. The image below shows a fundamental representation of an artificial neuron.

A Basic Artificial Neuron

In the above figure, the inputs to the network are represented by x1, x2, …, xn. These inputs are multiplied by the connection weights, represented by w1, w2, …, wn. The products are summed and fed through a transfer function (threshold unit) to generate the output. This process lends itself to large-scale physical implementation in a small package. Such an electronic implementation is equally achievable with various other network structures that use different summing and transfer functions.
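The weighted sum and threshold described above can be sketched in a few lines of Python (a minimal illustration; the weight and threshold values are arbitrary choices):

```python
def neuron(inputs, weights, threshold=0.0):
    """Basic artificial neuron: weighted sum fed through a step transfer function."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two inputs with equal weights behave like a logical AND when threshold = 1.5
print(neuron([1, 1], [1.0, 1.0], threshold=1.5))  # 1
print(neuron([1, 0], [1.0, 1.0], threshold=1.5))  # 0
```

Swapping the step function for a smooth transfer function (a sigmoid or hyperbolic tangent) turns the same structure into a neuron with continuous outputs.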

Some applications need binary, black-and-white answers. These applications include deciphering images of scenes, speech identification, and text recognition. For this kind of application, real-world inputs are turned into discrete values limited to some known set, such as the common 50,000 English words or the ASCII characters. Because of this limitation on output options, these applications do not always use networks whose neurons simply sum and smooth the inputs. Instead, such networks use the binary properties of ANDing and ORing of inputs. These functions, as well as many others, can be integrated into the summation and transfer functions of a network.

Other networks work on problems whose resolutions are not limited to a few known values. These networks must be capable of an unlimited number of responses. This type of application includes the intelligence behind robotic movements, where the inputs are processed and outputs are created that cause some device to move.

The movement of a device can span an unlimited number of precise motions. These networks do want to smooth their inputs, which arrive in interrupted bursts owing to the limitations of sensors (say, 30 readings a second). To accomplish that, they receive these inputs, sum the data, and produce output using a hyperbolic tangent as the transfer function. In this manner, the network's output values are continuous and better satisfy real-world interfaces. Other applications may simply sum the inputs and compare them to a threshold, yielding one of two possible outputs (a one or a zero).

Architectures of ANNs

In an artificial neural network, the artificial neurons are arranged in a series of layers. Basically, an artificial neural network consists of an input layer, a hidden layer, and an output layer. The figure below shows the architecture of an ANN.

The Architecture of an Artificial Neural Network

The input layer contains the artificial neurons that obtain input from the outside world, upon which the network will process, recognize, or learn. The output layer consists of units that present the network's response to the information it has learned. The hidden layer units lie between the input and output layers; the hidden layer transforms the input into something that the output units can utilize in some way.

Basically, there are four types of neural network architectures: single-layer feedforward, multi-layer feedforward, recurrent (feedback), and mesh architectures. Single-layer feedforward networks consist of one input layer and one output (neural) layer; in networks of this architecture, the number of outputs always coincides with the number of neurons. Applications include pattern classification and linear filtering. Networks with multi-layer feedforward architecture consist of one or more hidden neural layers; applications include pattern classification, system identification, and so on.
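A forward pass through a multi-layer feedforward network is just repeated weighted sums and transfer functions, applied layer by layer. A minimal sketch (the weights here are made up for illustration, and tanh is used as the transfer function):

```python
import math

def layer_forward(inputs, weights):
    """One neural layer: each row of weights produces one tanh-activated output."""
    return [math.tanh(sum(x * w for x, w in zip(inputs, row))) for row in weights]

def forward(inputs, layers):
    """Feed the inputs through every layer in sequence."""
    for weights in layers:
        inputs = layer_forward(inputs, weights)
    return inputs

# 2 inputs -> 3 hidden neurons -> 1 output neuron (arbitrary weights)
hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
output = [[0.7, -0.5, 0.2]]
print(forward([1.0, 0.5], [hidden, output]))
```

Because every hidden neuron here receives every input and feeds every output neuron, this sketch is also an example of the fully-connected structure described below.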

In networks with feedback architecture, the outputs of neurons are utilized as feedback inputs for other neurons. These networks are used for process control and other time-variant systems. The main feature of networks with mesh structures is that they consider the neurons' spatial arrangement for pattern extraction: the neurons' spatial localization is directly related to the procedure for adjusting their synaptic weights and thresholds. These networks are used for problems such as data clustering, system optimization, and so on.

A few popular neural network architectures include the perceptron, the multilayer perceptron, the radial basis function network, recurrent neural networks, and LSTM. A perceptron, also known as a single-layer perceptron, is a neural network with input units connected directly to one output unit and zero hidden layers. The radial basis function network is similar to the feedforward neural network; the only difference is that a radial basis function is used as the activation function of its neurons. Unlike the single-layer perceptron, the multilayer perceptron uses more than one hidden layer of neurons; the other name for this network is the deep feedforward neural network. In a recurrent neural network, the hidden layer neurons are equipped with self-connections, which give these networks a memory. In an LSTM (Long Short-Term Memory) network, a memory cell is integrated into the hidden layer neurons.

An artificial neural network's architecture defines how its neurons are placed or arranged in relation to each other; these arrangements are structured essentially by directing the neurons' synaptic connections. Within a particular architecture, the topology of a given neural network is defined as the distinct structural compositions it can assume. Most neural networks are fully-connected structures, which means that each hidden neuron is connected to every neuron in the preceding (input) layer and the subsequent (output) layer.

Training Processes

In order to produce the desired output, a neural network learns by iteratively adjusting its weights and biases, also known as free parameters. The network must first be trained for learning to take place; the defined set of rules by which training is performed is known as the learning algorithm. During execution, the network will thus be able to extract discriminant features about the system being mapped from samples obtained from that system. There are five types of learning in a neural network: supervised, unsupervised, reinforcement, online, and offline learning.

In supervised learning, the training data is the input to the network and the expected output is known; the weights are adjusted until the output matches the desired value. In unsupervised learning, the network is trained with input data alone, without known outputs; it categorizes the input data and regulates its weights by extracting features from that data. In reinforcement learning, the exact output value is not known, but the network receives feedback on whether its output is right or wrong; this is sometimes described as semi-supervised learning.

In online learning, the threshold and weight vector are adjusted after each training sample is presented to the network. In offline learning, also known as batch learning, the threshold and weight vector are adjusted only after the entire training set has been presented to the network.
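The difference between the two modes can be shown with a delta-rule update for a single linear neuron (a minimal sketch; the sample data and learning rate are arbitrary):

```python
def online_update(weights, samples, lr=0.1):
    """Online learning: adjust the weights after every individual sample."""
    for x, target in samples:
        pred = sum(wi * xi for wi, xi in zip(weights, x))
        error = target - pred
        weights = [wi + lr * error * xi for wi, xi in zip(weights, x)]
    return weights

def batch_update(weights, samples, lr=0.1):
    """Offline (batch) learning: accumulate the gradient over the whole set, then adjust once."""
    grads = [0.0] * len(weights)
    for x, target in samples:
        pred = sum(wi * xi for wi, xi in zip(weights, x))
        error = target - pred
        grads = [g + error * xi for g, xi in zip(grads, x)]
    return [wi + lr * g for wi, g in zip(weights, grads)]

samples = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]
print(online_update([0.0, 0.0], samples))
print(batch_update([0.0, 0.0], samples))
```

In online mode the second sample sees weights already changed by the first; in batch mode both samples contribute to one combined update.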

Learning Datasets

The learning datasets used with an ANN include a training set, a validation set, and a test set. A training set is a set of examples utilized for learning, i.e., to fit the network parameters. A validation set is a set of examples utilized to tune the network parameters. A test set is a set of examples utilized to assess the fully specified network's performance, or to apply the network in predicting the output for a known input. During the training process, one complete presentation of all the samples in the training set, used to adjust the synaptic weights and thresholds, is known as a training epoch.
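A common way to carve the available examples into these three sets is a simple proportional split (a sketch; the 70/15/15 ratio is a conventional choice, not a rule):

```python
import random

def split_dataset(examples, train=0.7, val=0.15, seed=0):
    """Shuffle the examples, then cut them into training, validation, and test sets."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n_train = int(len(examples) * train)
    n_val = int(len(examples) * val)
    return (examples[:n_train],
            examples[n_train:n_train + n_val],
            examples[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```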

Major Learning Algorithms

The learning algorithms most often used in neural networks include gradient descent and backpropagation. A learning algorithm is a set of steps for adjusting the thresholds and weights of a network's neurons, tuning the network so that its outputs come very close to the desired output values. Backpropagation is an extension of the gradient-based delta learning rule: after the difference between the target and actual output (the error) is found, the error is transmitted backward from the output layer, through the hidden layer, toward the input layer. Backpropagation is used for multilayer neural networks.

Gradient descent is the simplest training algorithm employed in the supervised training model. When the actual output differs from the target output, the error (difference) is computed, and the gradient descent algorithm transforms the weights of the network such that this error is minimized. Other learning algorithms include the Hebb rule, Hopfield law, competitive learning, the Least Mean Square (LMS) algorithm, and the self-organizing Kohonen rule.
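Gradient descent repeatedly nudges the weights against the gradient of the error. A minimal sketch for a single linear neuron learning the mapping y = 2x (the learning rate and epoch count are arbitrary choices):

```python
def train_gradient_descent(samples, lr=0.05, epochs=200):
    """Fit a single weight w so that w * x approximates the targets, minimizing squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = w * x - target
            w -= lr * error * x  # step against the gradient of 0.5 * error**2
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_gradient_descent(samples)
print(round(w, 3))  # converges close to 2.0
```

Backpropagation applies this same gradient step to every weight in a multilayer network, propagating the error backward through the layers to compute each weight's gradient.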

Applications of ANNs

Artificial neural networks are commonly used for clustering, prediction, classification, and association. For clustering, an ANN can identify distinguishing features of the data and assign items to different categories without any previous knowledge of the data. For prediction, ANNs are trained to produce the outputs expected from a given input (stock market prediction, for example). For classification, ANNs can be trained to assign a given data set or pattern to a predefined class. For association, ANNs can be trained to remember a specific pattern, so that when the network is presented with a noisy pattern, it associates it with the closest pattern in its memory or discards it.

ANNs are being applied in many industries including medicine, business, mineral potential mapping, cooperative distributed environments, image processing, geotechnical problems, nanotechnology, aquatic ecology, analysis of thermal transient processes, and so on. In business, ANNs are used in the areas of credit evaluation and marketing. In medicine, they are used for diagnosing and modeling the cardiovascular system, the implementation of electronic noses, and so on.


This article has given you an in-depth overview of artificial neural networks. It started by explaining neural networks, then moved on to ANNs and the reasons for using them, and later covered their architectures, training processes, and applications. To gain mastery of this subject and obtain a job in this area, it is recommended that you opt for machine learning training. Please let me know your thoughts in the comments section.

About the Author


Savaram Ravindra is a Content Lead at Mindmajix. His passion lies in writing articles on different niches, including some of the most innovative and emerging software technologies, digital marketing, business, and so on. Follow him on LinkedIn and Twitter.


Mapping Expertise and Illuminating Dark Assets

by Alanna Riederer

At some point in your life, you’ve found yourself describing a project you’ve worked on to a friend. They interject, “I’ve done something similar to this before,” and go on to describe a field or skill you didn’t know they were familiar with. You’ve just uncovered some dark assets about your friend: a set of skills or knowledge that were only discovered due to an accidental trigger.

This can be problematic when it comes to group projects, whether you’re working with an existing team or you’re putting one together. The people and tools available to you are limited to those you are aware of or those cataloged in scattered directories and lists across the internet. There are far more dark assets than known assets.

In order to build and branch teams more effectively and innovatively, we need two things: a map and a compass. We build a map so that we can see the dark assets. We equip ourselves with a compass to guide us towards relevant assets.

We like to use a network diagram as our map.


We use these networks to map people and resources. People could be resources, but we tend to distinguish people from inanimate assets, like publications or technologies.

We dub these people and resources “entities.” Every entity has “attributes” that describe it. For instance, people have interests, skills, passions, publications, and projects associated with them. A publication has a date, an author list, an abstract, and key terms. As I list these out, imagine how connections would form in the network between entities across shared attributes.

In the network below, you can see shared connections on technology, for-profit, javascript, music, and sustainability, and unique perspectives on Education, Social Good, cello, art, and AI.
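The idea of linking entities through their shared attributes can be sketched as a tiny network (a toy illustration; the names and attributes here are invented, not drawn from the actual Data+Creativity City data):

```python
from itertools import combinations

# Each entity is described by a set of attributes
entities = {
    "Ada":   {"technology", "music", "AI"},
    "Grace": {"technology", "javascript", "sustainability"},
    "Linus": {"javascript", "music", "for-profit"},
}

# An edge forms between two entities for every attribute they share
edges = {(a, b): entities[a] & entities[b]
         for a, b in combinations(entities, 2)
         if entities[a] & entities[b]}

for (a, b), shared in edges.items():
    print(f"{a} -- {b}: {sorted(shared)}")
```

Attributes that appear in only one entity's set (the "unique perspectives") stay off the edges entirely, which is exactly what makes them easy to overlook as dark assets.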

In addition to the map, we need the equivalent of a compass – finer tools to navigate this environment. These tools illuminate the entities that bring the most complementary skills to our team composition.

  • Suggestion algorithms allow us to find teammates that add complementary differences to our team. This is helpful for deciding which entities we should focus on in our map.
  • Artifact-recording tools allow us to document and track ideas inside documents and see how they connect.
  • Termscapes are a richer map for navigating the content that our community generates or studies. They are generated by analyzing unstructured text about a collection of entities and arranging those entities into a landscape of their terms.

Using these tools allows us to remove the accidental nature of discovering important resources. What tools do you use or wish you had to approach this problem?

The images in this post are screenshots from Data+Creativity City, an application that captures connections between members of the Data+Creativity Meetup. If you’re a member, come join the City and see how you’re connected!
