Volume 9, Issue 1, June 2020, Page: 24-30
Use of Virtual Forward Propagation Network Model to Translate Analog Components
Muhammad Sana Ullah, Department of Electrical and Computer Engineering, Florida Polytechnic University, Lakeland, USA
William Brickner, Department of Electrical and Computer Engineering, Florida Polytechnic University, Lakeland, USA
Emadelden Fouad, Department of Natural Sciences, Florida Polytechnic University, Lakeland, USA
Received: Jun. 1, 2020;       Accepted: Jun. 17, 2020;       Published: Jul. 17, 2020
DOI: 10.11648/j.cssp.20200901.13
Abstract
Neural computing is an emerging research topic due to the rapidly growing demand for machine learning applications. In this simulation study, a neural network model was trained in free software and its functionality was translated into hardware. In the context of analog neural networks, this research seeks to verify that a shifted sigmoid function can approximate the transfer function of a CMOS inverter. Demonstrating this approximation accurately while reducing the number of components would help in implementing neural-network-based integrated chips. A compromise is made in the choice of distance metric for the proposed function; this distance between the given CMOS transfer function and the shifted sigmoid function is minimized using gradient descent. The approximate transfer function of the CMOS inverter is then verified in three-layer perceptron networks. The network topology randomly generates initial weights to provide a diverse set of truth tables. We report two networks trained with a back-propagation algorithm, chosen because of the volatile nature of the network topology and the activation function. The results of this research show that the shifted sigmoid function approximates the CMOS inverter transfer function adequately for the purposes of these perceptron networks.
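The fitting step described above, minimizing a distance metric between a CMOS inverter transfer curve and a shifted sigmoid by gradient descent, can be sketched as follows. This is an illustrative sketch only: the inverter voltage transfer curve model (a steep tanh), its parameters, and all function names are assumptions for demonstration, not the authors' implementation.

```python
import math

# Hypothetical CMOS inverter voltage transfer curve (VTC), VDD = 1 V,
# modeled here as a steep decreasing tanh centered on an assumed
# switching threshold vth. Purely illustrative.
def inverter_vtc(vin, vdd=1.0, vth=0.5, gain=8.0):
    return vdd * 0.5 * (1.0 - math.tanh(gain * (vin - vth)))

# Shifted sigmoid candidate: f(v) = 1 / (1 + exp(a * (v - b))),
# where a controls steepness and b is the horizontal shift.
def shifted_sigmoid(v, a, b):
    return 1.0 / (1.0 + math.exp(a * (v - b)))

def fit_shifted_sigmoid(samples=101, lr=0.5, steps=2000):
    """Minimize the mean squared distance between the VTC and the
    shifted sigmoid over [0, VDD] using plain gradient descent."""
    vs = [i / (samples - 1) for i in range(samples)]
    ys = [inverter_vtc(v) for v in vs]
    a, b = 1.0, 0.0  # initial slope and shift
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for v, y in zip(vs, ys):
            p = shifted_sigmoid(v, a, b)
            err = p - y
            # Analytic partials: dp/da = -p(1-p)(v-b), dp/db = a*p(1-p)
            grad_a += 2.0 * err * (-p * (1.0 - p) * (v - b))
            grad_b += 2.0 * err * (a * p * (1.0 - p))
        a -= lr * grad_a / samples
        b -= lr * grad_b / samples
    return a, b

a, b = fit_shifted_sigmoid()
print(f"fitted slope a = {a:.3f}, shift b = {b:.3f}")
```

With the assumed decreasing-tanh VTC, the descent drives the shift toward the inverter's switching threshold and steepens the slope, reducing the distance metric relative to the initial guess.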
Keywords
Analog Components, Artificial Neural Network, Machine Learning, Universal Gates, Virtual Network
To cite this article
Muhammad Sana Ullah, William Brickner, Emadelden Fouad, Use of Virtual Forward Propagation Network Model to Translate Analog Components, Science Journal of Circuits, Systems and Signal Processing. Vol. 9, No. 1, 2020, pp. 24-30. doi: 10.11648/j.cssp.20200901.13
Copyright
Copyright © 2020. The authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.