# American Institute of Mathematical Sciences

2011, 2011(Special): 505-514. doi: 10.3934/proc.2011.2011.505

## Improvement of image processing by using homogeneous neural networks with fractional derivatives theorem

University of Rzeszow, Institute of Technology, 35-959 Rzeszow, 16A Rejtana Str.

Received: July 2010. Revised: April 2011. Published: October 2011.

The present paper deals with an original approach to designing feed-forward neural networks for the task of interferometry image recognition. To put the interferometry technique in context, we recall briefly that it is a modern method for reconstructing the three-dimensional shape of an observed object from two-dimensional flat images registered by a CCD camera. The preliminary stage of this process is ridge detection, and the discussed neural network was applied to solve this computational task. Drawing on similarities with biological neural systems, the authors present the design process of a homogeneous neural network for the task of maxima detection. The fractional derivative theorem is used to determine the weight distribution function as well as the transfer functions. To assure the reader that the theoretical considerations are correct, a comprehensive review of experimental results obtained with two-dimensional signals is also presented.
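The role a fractional derivative can play in the maxima-detection stage can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal, hypothetical example using the Grünwald–Letnikov discretization of a fractional derivative of order α ∈ (0, 1] as a causal filter, whose downward zero crossing marks a local maximum (ridge) in a one-dimensional fringe profile. All function names and parameters are illustrative.

```python
import numpy as np

def gl_weights(alpha: float, n: int) -> np.ndarray:
    """Grunwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    computed by the recurrence w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def frac_derivative(signal: np.ndarray, alpha: float,
                    n_terms: int = 32, h: float = 1.0) -> np.ndarray:
    """Causal GL approximation: D^a f(x) ~ h^(-a) * sum_k w_k * f(x - k*h)."""
    w = gl_weights(alpha, n_terms)
    # Full convolution truncated to the signal length gives a causal filter.
    return np.convolve(signal, w)[: len(signal)] / h ** alpha

def detect_ridges(signal: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """Indices where the fractional derivative crosses zero downward,
    i.e. candidate local maxima (ridges) of the profile."""
    d = frac_derivative(signal, alpha)
    return np.where((d[:-1] > 0) & (d[1:] <= 0))[0]
```

For α = 1 the weights reduce to [1, -1, 0, ...], the ordinary backward difference, so the sketch degenerates to classical maximum detection; fractional orders below 1 blend in a decaying memory of earlier samples, which is the kind of weight distribution the abstract attributes to the fractional derivative theorem.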
Citation: Zbigniew Gomolka, Boguslaw Twarog, Jacek Bartman. Improvement of image processing by using homogeneous neural networks with fractional derivatives theorem. Conference Publications, 2011, 2011 (Special) : 505-514. doi: 10.3934/proc.2011.2011.505
