<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>0012-7353</journal-id>
<journal-title><![CDATA[DYNA]]></journal-title>
<abbrev-journal-title><![CDATA[Dyna rev.fac.nac.minas]]></abbrev-journal-title>
<issn>0012-7353</issn>
<publisher>
<publisher-name><![CDATA[Universidad Nacional de Colombia]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S0012-73532009000400027</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[NEURAL NETWORK BASED SYSTEM IDENTIFICATION OF A PMSM UNDER LOAD FLUCTUATION]]></article-title>
<article-title xml:lang="es"><![CDATA[MODELAMIENTO BASADO EN REDES NEURONALES DE UN PMSM BAJO FLUCTUACIONES DE CARGA]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[QUIROGA]]></surname>
<given-names><![CDATA[JABID]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[CARTES]]></surname>
<given-names><![CDATA[DAVID]]></given-names>
</name>
<xref ref-type="aff" rid="A02"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[EDRINGTON]]></surname>
<given-names><![CDATA[CHRIS]]></given-names>
</name>
<xref ref-type="aff" rid="A03"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Universidad Industrial de Santander]]></institution>
<addr-line><![CDATA[Bucaramanga ]]></addr-line>
<country>Colombia</country>
</aff>
<aff id="A02">
<institution><![CDATA[Florida State University]]></institution>
<addr-line><![CDATA[Tallahassee ]]></addr-line>
<country>United States</country>
</aff>
<aff id="A03">
<institution><![CDATA[Florida State University]]></institution>
<addr-line><![CDATA[Tallahassee ]]></addr-line>
<country>United States</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>12</month>
<year>2009</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>12</month>
<year>2009</year>
</pub-date>
<volume>76</volume>
<numero>160</numero>
<fpage>273</fpage>
<lpage>282</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_arttext&amp;pid=S0012-73532009000400027&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_abstract&amp;pid=S0012-73532009000400027&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_pdf&amp;pid=S0012-73532009000400027&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="es"><p><![CDATA[La técnica de redes neuronales es usada para modelar un PMSM. Una red recurrente multicapas predice el componente fundamental de la señal de corriente un paso adelante usando como entradas el componente fundamental de las señales de voltaje y la velocidad del motor. El modelo propuesto de PMSM puede ser implementado en un sistema de monitoreo de la condición del equipo para realizar labores de detección de fallas, evaluación de su integridad o del proceso de envejecimiento de éste. El modelo se valida usando un banco de pruebas para PMSM de 15 hp. El sistema de adquisición de datos es desarrollado usando Matlab®/Simulink® con dSpace® como interfase con el hardware. El modelo mostró capacidades de generalización y un desempeño satisfactorio en la determinación de las componentes fundamentales de las corrientes en tiempo real bajo condiciones de no carga y fluctuaciones de esta.]]></p></abstract>
<abstract abstract-type="short" xml:lang="en"><p><![CDATA[A neural network based approach is applied to model a PMSM. A multilayer recurrent network provides a near-term fundamental current prediction using as inputs the fundamental components of the voltage signals and the speed. The proposed PMSM model can be implemented in a condition-based maintenance system to perform fault detection, integrity assessment, and aging evaluation. The model is validated using a 15 hp PMSM experimental setup. The acquisition system is developed using Matlab®/Simulink® with dSpace® as an interface to the hardware, i.e. the PMSM drive system. The model shows generalization capabilities and satisfactory performance in the online determination of the fundamental currents under no load and load fluctuations.]]></p></abstract>
<kwd-group>
<kwd lng="es"><![CDATA[Identificación de Sistemas]]></kwd>
<kwd lng="es"><![CDATA[PMSM]]></kwd>
<kwd lng="es"><![CDATA[Redes Neuronales]]></kwd>
<kwd lng="es"><![CDATA[Redes Recurrentes]]></kwd>
<kwd lng="en"><![CDATA[System Identification]]></kwd>
<kwd lng="en"><![CDATA[PMSM]]></kwd>
<kwd lng="en"><![CDATA[Neural Network]]></kwd>
<kwd lng="en"><![CDATA[Recurrent Networks]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[ <p align="center"><font size="4" face="Verdana, Arial, Helvetica, sans-serif"><b>NEURAL NETWORK BASED SYSTEM IDENTIFICATION OF A PMSM UNDER LOAD FLUCTUATION</b></font></p> <p align="center"><i><font size="3" face="Verdana, Arial, Helvetica, sans-serif"><b>MODELAMIENTO BASADO EN REDES NEURONALES DE UN PMSM BAJO FLUCTUACIONES DE CARGA</b></font></i></p> <p align="center">&nbsp;</p> <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>JABID QUIROGA</b> <br> <i>Profesor asociado, Universidad Industrial de Santander, Bucaramanga, Colombia, <a href="mailto:jabib@uis.edu.co">jabib@uis.edu.co</a></i></font></p> <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>DAVID CARTES</b> <br> </font><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><i>Profesor asociado, Ingenier&iacute;a Mec&aacute;nica, Florida State University, Tallahassee, United States, <a href="mailto:dave@eng.fsu.edu">dave@eng.fsu.edu</a></i> </font></p> <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>CHRIS EDRINGTON</b> <br> </font><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><i>Profesor, Ingenier&iacute;a El&eacute;ctrica y Computacional, Florida State University, Tallahassee, United States, <a href="mailto:edrinch@eng.fsu.edu">edrinch@eng.fsu.edu</a></i></font></p> <p align="center">&nbsp;</p> ]]></body>
<body><![CDATA[<p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>Recibido para revisar julio 25 de 2008, aceptado mayo 21 de 2009, versi&oacute;n final junio 19 de 2009</b></font></p> <p>&nbsp;</p> <hr> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>RESUMEN:</b> La t&eacute;cnica de redes neuronales es usada para modelar un PMSM. Una red recurrente multicapas predice el componente fundamental de la señal de corriente un paso adelante usando como entradas el componente fundamental de las señales de voltaje y la velocidad del motor. El modelo propuesto de PMSM puede ser implementado en un sistema de monitoreo de la condici&oacute;n del equipo para realizar labores de detecci&oacute;n de fallas, evaluaci&oacute;n de su integridad o del proceso de envejecimiento de &eacute;ste. El modelo se valida usando un banco de pruebas para PMSM de 15 hp. El sistema de adquisici&oacute;n de datos es desarrollado usando Matlab<sup>®</sup>/Simulink<sup>®</sup> con dSpace<sup>®</sup> como interfaz con el hardware. El modelo mostr&oacute; capacidades de generalizaci&oacute;n y un desempeño satisfactorio en la determinaci&oacute;n de las componentes fundamentales de las corrientes en tiempo real bajo condiciones de no carga y fluctuaciones de esta.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>PALABRAS CLAVE:</b> Identificaci&oacute;n de Sistemas, PMSM, Redes Neuronales, Redes Recurrentes.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>ABSTRACT:</b> A neural network based approach is applied to model a PMSM. A multilayer recurrent network provides a near-term fundamental current prediction using as inputs the fundamental components of the voltage signals and the speed. The proposed PMSM model can be implemented in a condition-based maintenance system to perform fault detection, integrity assessment, and aging evaluation.
The model is validated using a 15 hp PMSM experimental setup. The acquisition system is developed using Matlab<sup>®</sup>/Simulink<sup>®</sup> with dSpace<sup>®</sup> as an interface to the hardware, i.e. the PMSM drive system. The model shows generalization capabilities and satisfactory performance in the online determination of the fundamental currents under no load and load fluctuations. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>KEYWORDS: </b>System Identification, PMSM, Neural Network, Recurrent Networks.</font></p> <hr> <p>&nbsp;</p> <p><font size="3" face="Verdana, Arial, Helvetica, sans-serif"><b>1. INTRODUCTION</b></font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The number of applications of Permanent Magnet Synchronous Machines (PMSM) is steadily increasing as a result of the advantages attributed to this type of motor. PMSMs are found in power and positioning applications such as ship propulsion systems, robotics, machine tools, etc. The main reason the PMSM is so attractive is its physical construction, which consists of permanent magnets mounted on the rotor. This arrangement improves efficiency and performance. The PMSM presents several advantages compared with the induction motor, the most popular electromechanical actuator, such as: high power density, high air-gap flux density, high torque/inertia ratio, low package weight, lower copper losses, high efficiency, and a smaller rotor for the same power output.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Many real-world applications, such as adaptive control, adaptive filtering, adaptive prediction and Fault Detection and Diagnosis (FDD) systems, require a model of the system to be available online while the system is in operation.
The NN-based PMSM model proposed in this study can be implemented, in particular, as a component of a model-based FDD system to monitor the electrical condition of a PMSM and to evaluate motor aging.</font></p> ]]></body>
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The basic idea of model-based FDD is to compare measurements with computationally obtained values of the corresponding variables, from which residual signals can be constructed. The residuals provide the information to detect the fault. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In fact, in an FDD system the residuals are generated by comparing a variable computed multiple time steps ahead (MSP) into the future with the present value of that variable. This lead time is required to account for the computational time the model spends producing, online, the signal to be compared. MSP is performed using a recursive approach based on a dynamic recurrent neural network &#91;1&#93;. This recursive approach is followed in this study, and it is one of the advantages of neural networks over other techniques such as support vector machines, whose training is a batch algorithm for which no recursive version exists &#91;2&#93;. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The modeling process for complex systems such as a PMSM under load fluctuation demands methods which deal with high dimensionality, nonlinearity, and uncertainty. Therefore, alternative techniques to traditional linear and nonlinear modeling methods are needed. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">System identification is an experimental approach for determining the dynamics of a system from measured input/output data sets. It includes: experimental data sets, a particular model structure, the estimation of the model parameters and, finally, the validation of the identified model.
A complete system identification process must cover the items mentioned above &#91;3&#93;.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">One such approach is Neural Network (NN) modeling. NNs are powerful empirical modeling tools that can be trained to represent complex multi-input multi-output nonlinear systems. NNs have many advantageous features, including parallel and distributed processing and an efficient nonlinear mapping between inputs and outputs &#91;4&#93;. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">NNs have also been used in control applications. In &#91;4-6&#93;, multilayer feedforward artificial neural network speed PID controllers for a PMSM are presented. In &#91;4&#93;, online NN self-tuning is developed and the NN is integrated with the vector control scheme of the PMSM drive. In &#91;6-8&#93;, online adaptive NN-based vector control of a PMSM is proposed. In this application the NN plays both roles, system identification and speed control.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Various types of NN structures have been used for modeling dynamic systems. Multilayer NNs are universal approximators and have been utilized to provide an input-output representation of complex systems. Among the available multilayer NN architectures, the recurrent network has been shown to be more robust than a plain feedforward network when the accumulative error is taken into account &#91;8&#93;. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In &#91;1&#93;, a dynamic recurrent network in the form of an IIR filter is proposed as a multi-step predictor for complex systems. Present and delayed observations of the measured system inputs and outputs are utilized as inputs to the network.
The proposed architecture includes local and global feedback.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Some NN structures for modeling electrical motors have been proposed previously. Most of them have focused on modeling induction machines. In &#91;9&#93;, a NN model of an induction motor based on a NARX structure is used to simulate the speed using the voltage signal as input. In &#91;10&#93;, a NN model for simulating the three phase currents in an induction motor is proposed based on a multilayer recurrent NN; however, load fluctuation is not addressed.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In this paper, a near-term fundamental current predictor of a PMSM is proposed using a recurrent (global and local feedback) multilayer network with delayed connections of the voltage and speed signals as inputs. This architecture gives the neural network the ability to capture the complex dynamics associated with the operation of the PMSM under load fluctuation.</font></p> ]]></body>
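The residual logic described above can be sketched in a few lines; this is a minimal illustration only, with a hypothetical predicted-current source and a user-chosen threshold (the paper does not specify a threshold value):

```python
def residual_fault_flag(measured, predicted, threshold):
    """Compare measured currents with model predictions and flag a fault.

    measured, predicted: lists of per-phase fundamental current magnitudes.
    threshold: user-chosen residual bound (hypothetical value).
    """
    residuals = [abs(m - p) for m, p in zip(measured, predicted)]
    # A fault is declared when any phase residual exceeds the bound.
    return residuals, any(r > threshold for r in residuals)

# Example: phase B deviates from the model prediction.
res, fault = residual_fault_flag([0.52, 0.67, 0.50], [0.51, 0.53, 0.50], 0.05)
```

In a multi-step-prediction setting, `predicted` would hold the values the recurrent network produced several steps earlier for the present instant.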
<body><![CDATA[<p>&nbsp;</p> <p><font size="3" face="Verdana, Arial, Helvetica, sans-serif"><b>2. METHODOLOGY</b></font></p> <font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>2.1 NN for System Identification <br> </b>In the last decade there has been growing interest in identification methods based on neural networks &#91;11&#93;. The recent success of dynamic recurrent neural networks as semiparametric approximators for modeling highly complex systems offers the potential for broadening the industrial acceptance of model-based system identification methods &#91;12&#93;. Neural networks are universal approximators in that a sufficiently large network can implement any function to any desired degree of accuracy. By presenting a network with samples from a complex system and training it to output subsequent values, the network can be trained to approximate the dynamics which underlie the system. The network, once trained, can then be used to generalize and predict states that it has not been exposed to. <p>The use of NNs as a modeling tool involves several issues: the NN architecture, the number of neurons and layers, the activation functions, the appropriate training data set and a suitable learning algorithm. </p> <p>Recurrent networks are multilayer networks which have at least one delayed feedback loop; that is, an output of a layer feeds back to a preceding layer. In addition, some recurrent networks have delayed inputs (recurrent dynamic networks). These delays give the network partial memory, since the hidden layers and the input layer receive data not only at time <i>t</i> but also at time <i>t-p</i>, where <i>p</i> is the number of delayed samples. This makes recurrent networks powerful in approximating functions depending on time.
</p> <p>From the computational point of view, a dynamic neural structure that contains feedback may provide more computational advantages than a static, purely feedforward neural structure. In general, a small feedback system is equivalent to a large, and possibly infinite, feedforward system &#91;11&#93;. A well-known example: an infinite-order <i>finite impulse response</i> (FIR) filter is required to emulate a single-pole <i>infinite impulse response</i> (IIR) filter. </p> </font> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>A. The effect of load change on a Synchronous Motor <br> </b>If a load is attached to the shaft of a synchronous motor, the motor will develop enough torque to keep the motor and its load turning at synchronous speed. If the load on the shaft is increased, the rotor will initially slow down and the induced torque will increase. The increase in induced torque eventually speeds the rotor back up, and the motor again turns at synchronous speed but with a larger torque. <a href="#fig01">Figure 1</a> shows the behavior of the speed under a load fluctuation in the PMSM used in this study. The load is applied using a 2 second ramp from the no-load condition to 20% of the rated torque and a 2 second ramp down from 20% of the rated torque to the no-load condition. It should be noted that the settling time of the speed control implemented in the motor is around 2 seconds. This value is of great importance in the determination of the training set for the neural network.
The rise in torque at constant speed increases the input power to the machine via an increase in current.</font></p> <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig01"></a><img src="/img/revistas/dyna/v76n160/a27fig01.gif"> <br> Figure 1.</b> Effect of the load on the speed in the PMSM (Torque = 20% of rated value)</font></p> ]]></body>
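The FIR/IIR remark in Section 2.1 can be checked numerically: a single-pole IIR filter, which uses one feedback tap, is matched by a FIR filter only as the number of taps grows without bound. A small sketch (the pole value 0.9 is illustrative, not from the paper):

```python
def iir_single_pole(x, a):
    """y[n] = a*y[n-1] + x[n]: one feedback tap gives infinite memory."""
    y, prev = [], 0.0
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return y

def fir_truncated(x, a, order):
    """FIR approximation: y[n] is the sum of a**k * x[n-k] for k up to
    'order'. An exact match would need infinitely many taps."""
    y = []
    for n in range(len(x)):
        taps = min(order + 1, n + 1)
        y.append(sum((a ** k) * x[n - k] for k in range(taps)))
    return y

x = [1.0] + [0.0] * 9              # unit impulse
exact = iir_single_pole(x, 0.9)
approx = fir_truncated(x, 0.9, 3)  # only 4 taps: truncation error at n = 4
```

The two impulse responses agree up to the FIR order and diverge afterwards, which is the equivalence between a small feedback system and a large feedforward one stated above.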
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>2.2 Neural Network Model Development </b> <br> Because of the complexity of the dynamic behavior of the PMSM under load fluctuation, and the difficulty of establishing an exact mathematical formulation to develop an explicit model of the PMSM with conventional methods, a nonlinear empirical model using a NN is developed. In this paper, it is proposed to utilize a multi-layer dynamic recurrent NN with local feedback of the hidden nodes and global feedback, as shown in <a href="#fig02">Figure 2</a>. Local feedback implies the use of delayed hidden node outputs as hidden node inputs, whereas global feedback is produced by the connection of delayed network outputs as network inputs. This architecture provides a network in the form of a nonlinear infinite impulse response (IIR) filter. </font></p> <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig02"></a><img src="/img/revistas/dyna/v76n160/a27fig02.gif"> <br> Figure 2.</b> General structure of the multilayer NN</font></p> <font face="Verdana, Arial, Helvetica, sans-serif"><font size="2"> <p>The operation of a recurrent NN predictor that employs global feedback can be represented as (1):</p> <p><img src="/img/revistas/dyna/v76n160/a27eq01.gif"></p> <p>where F(•) represents the nonlinear mapping of the NN, <i>u</i> is the input vector, <sub><img src="/img/revistas/dyna/v76n160/a27eq002.gif"></sub> denotes the simulated values, and <i>W</i> is the set of parameters associated with the NN. </p> <p>This NN architecture provides the capability to predict the output several steps into the future without the availability of actual outputs. Empirical models with predictive capabilities are desirable in fault monitoring and diagnosis applications.
The implemented NN consists of an input layer, a hidden layer, and an output layer. Each of the processing elements of a MLP network is governed by (2).</p> <p><img src="/img/revistas/dyna/v76n160/a27eq02.gif"></p> <p>for <i>i</i> = 1,&#8230;,<i>N<sub>&#91;l&#93;</sub></i> (the node index) and <i>l</i> = 1,&#8230;,<i>L</i> (the layer index), where <i>x</i><sub>&#91;</sub><i><sub>l,i</sub></i><sub>&#93;</sub> is the <i>i</i><sup>th</sup> node output of the <i>l</i><sup>th</sup> layer, <i>b</i>&#91;<i>l,i</i>&#93; is the bias, and <i>s</i><sub>&#91;</sub><i><sub>l,i</sub></i><sub>&#93;</sub>(•) is the activation function of the <i>i</i><sup>th</sup> node in the <i>l</i><sup>th</sup> layer. The relationship between inputs and outputs in a multilayer NN can be expressed using a general nonlinear input-output model, (3):</p> ]]></body>
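A node of the form (2) is just a weighted sum passed through an activation function; the sketch below runs a toy forward pass with a tansig hidden layer and a purelin output layer, as in the paper. The dimensions and weight values are placeholders, not the trained values:

```python
import math

def tansig(n):
    # MATLAB's tansig is numerically equivalent to tanh(n).
    return math.tanh(n)

def purelin(n):
    return n  # linear output activation

def node(inputs, weights, bias, act):
    """Eq. (2): activation of bias plus weighted sum of node inputs."""
    return act(bias + sum(w * u for w, u in zip(weights, inputs)))

def forward(u, W_hidden, b_hidden, W_out, b_out):
    hidden = [node(u, w, b, tansig) for w, b in zip(W_hidden, b_hidden)]
    return [node(hidden, w, b, purelin) for w, b in zip(W_out, b_out)]

# Toy dimensions: 4 inputs, 2 hidden nodes, 1 output. The paper's network
# uses 6 hidden nodes and 3 outputs; these weights are illustrative only.
y = forward([0.1, -0.2, 0.3, 0.5],
            [[0.2, 0.1, -0.3, 0.4], [0.1, -0.1, 0.2, 0.3]],
            [0.0, 0.1],
            [[0.5, -0.4]],
            [0.05])
```

The recurrent structure of Figure 2 would additionally append delayed hidden and output values to the input vector `u` at each step.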
<body><![CDATA[<p><img src="/img/revistas/dyna/v76n160/a27eq03.gif"></p> <p>where <i>W</i> is the weight matrix determined by the learning algorithm and <i>f</i>(•) represents the nonlinear mapping of the input vector using any activation function. In this study, the <i>tansig</i> function is used in the hidden layer and <i>purelin</i> is used in the output layer. The input vector is defined as:</p> <p><img src="/img/revistas/dyna/v76n160/a27eq04.gif"></p> <p>where <i>NS</i> represents a non-stationary signal and <sub><img src="/img/revistas/dyna/v76n160/a27eq004.gif"></sub> are the actual normalized values of the 3 phase line voltages:</p> <p><img src="/img/revistas/dyna/v76n160/a27eq05.gif"></p> <p>The normalized values of currents and voltages are obtained by dividing the present current and voltage data by the maximum values of current and voltage, respectively. The limit values of current and voltage are taken from the data sheet of the PMSM used in the test bench. The vector <sub><img src="/img/revistas/dyna/v76n160/a27eq006.gif"></sub> is the vector of the three normalized predicted phase currents: </p> <p><img src="/img/revistas/dyna/v76n160/a27eq06.gif"></p> <p>The variable <i>v<sup>NS</sup></i> is the normalized rotational velocity of the rotor with respect to the maximum value indicated in the data sheet. The hidden layer is composed of 6 neurons, with delayed local feedback employed in each neuron. The number of hidden nodes is chosen to balance accuracy and network size.
The output layer, with global feedback, has 3 nodes, which correspond to the three phase-current predictions, as shown in (7).</p> <p><img src="/img/revistas/dyna/v76n160/a27eq07.gif"></p> </font></font><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>2.3 Model Training and Validation <br> </b></font><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Generally, training recurrent dynamic networks is computationally intensive, and in this work it has been difficult due to the time dependencies present in the architecture. Recurrent networks exhibit complex error surfaces characterized by very narrow valleys whose bottoms are often cusps. Additionally, the initial conditions assigned in the training stage and variations in the input sequence can produce spurious valleys in the error surface &#91;14&#93;. </font> ]]></body>
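The normalization described above divides each measurement by the corresponding data-sheet maximum; a minimal sketch, with hypothetical rated limits standing in for the actual PMSM data-sheet values:

```python
def normalize(samples, max_value):
    """Scale raw samples by the data-sheet maximum so values lie in [-1, 1]."""
    return [s / max_value for s in samples]

# Hypothetical rated limits, standing in for the PMSM data-sheet values.
V_MAX, I_MAX, SPEED_MAX = 640.0, 20.0, 1800.0

v_norm = normalize([320.0, -640.0, 480.0], V_MAX)   # line voltages
i_norm = normalize([5.0, 10.0, -20.0], I_MAX)       # phase currents
w_norm = normalize([900.0], SPEED_MAX)              # rotor speed
```

Keeping all inputs and targets on the same scale is what lets the tansig hidden layer operate in its sensitive range rather than saturating.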
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The goal of NN training is to produce a network which yields small errors on the training set but which will also respond properly to novel inputs (generalization). Therefore, in order to provide appropriate training of the model, issues such as regularization, the initial values of the parameters, and the need to train the NN several times must be addressed in order to achieve optimal results.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The proposed neural network model is trained using Bayesian regularization, conveniently implemented within the framework of the Levenberg-Marquardt algorithm.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Regularization is used to avoid an overfitted network and to produce a network that generalizes well &#91;15&#93;. This approach constrains the size of the network weights by adding a penalty term, proportional to the sum of the squares of the weights (<i>msw</i>) and biases, to the performance function. The objective function thus becomes a penalized maximum likelihood estimation procedure, as shown in (8).</font></p> <p><img src="/img/revistas/dyna/v76n160/a27eq08.gif"></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where <i>a</i> and <i>b</i> are objective function parameters. This approach provides the neural network with a smooth response. The values of a and b determine the response of the NN: when a &lt;&lt; b, overfitting of the NN occurs; if a &gt;&gt; b, the NN does not adequately fit the training data. In &#91;16&#93;, an approach is proposed to determine the optimal regularization parameters based on a <i>Bayesian framework</i>. In this framework, the weights and biases of the network are assumed to be random variables with specified distributions.
The regularization parameters are related to the unknown variances associated with these distributions, and can be estimated using statistical techniques. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The Levenberg-Marquardt algorithm is a variation of Newton&#8217;s method and was designed for minimizing functions that are sums of squares of other nonlinear functions &#91;17&#93;. The algorithm speeds up the training by employing an approximation of the Hessian matrix (9). The gradient is computed via (10), </font></p> <p><img src="/img/revistas/dyna/v76n160/a27eq0910.gif"></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where <sub><img src="/img/revistas/dyna/v76n160/a27eq008.gif"></sub> is the Jacobian matrix, which contains the first derivatives of the network errors with respect to the weights and biases, and <i>e</i> is a vector of network errors.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Computing the Jacobian matrix is much less complex than computing the Hessian matrix. The Levenberg-Marquardt algorithm uses this approximation to the Hessian matrix in the following Newton-like update:</font></p> <p><img src="/img/revistas/dyna/v76n160/a27eq11.gif"></p> ]]></body>
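The update (11) can be exercised on a tiny sum-of-squares problem; below is one Levenberg-Marquardt step for fitting the scalar model y = w*x, with the Jacobian of the errors formed explicitly (the problem and values are illustrative, not from the paper):

```python
def lm_step(w, xs, ys, mu):
    """One Levenberg-Marquardt update for the scalar model y = w*x.

    Errors: e_i = y_i - w*x_i, so the Jacobian entries are de_i/dw = -x_i.
    Update (11), scalar case: w_new = w - (J'J + mu)^-1 * J'e.
    """
    e = [y - w * x for x, y in zip(xs, ys)]
    J = [-x for x in xs]
    JtJ = sum(j * j for j in J)
    Jte = sum(j * ei for j, ei in zip(J, e))
    return w - Jte / (JtJ + mu)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # exact solution is w = 2
w = lm_step(0.0, xs, ys, mu=0.0)             # mu = 0: Gauss-Newton step
```

With mu = 0 the step reduces to Gauss-Newton and solves this linear problem in one update; a large mu shrinks the step toward gradient descent, matching the role of the adaptive scalar in the algorithm.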
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where <i>x<sub>k</sub></i> is a vector of current weights and biases. The Levenberg-Marquardt algorithm is a compromise between the Gauss-Newton method (faster and more accurate near an error minimum) and the gradient descent method (guaranteed convergence), based on the adaptive value of <i>m</i>. If the scalar <i>m</i> is zero, then the update process described by (11) resembles Newton&#8217;s method using the approximate Hessian matrix. If <i>m</i> is large, then this process becomes the gradient descent method with a small step size.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">A complete description of the Levenberg-Marquardt Backpropagation (LMBP) algorithm can be found in &#91;15&#93;. A detailed discussion of the implementation of Bayesian regularization in combination with Levenberg-Marquardt training is also presented in &#91;15&#93;.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">During the NN model training, each layer&#8217;s weights and biases are initialized according to the method proposed by Nguyen and Widrow in &#91;18&#93;. This method for setting the initial weights of the hidden layers of a multilayer neural network provides a considerable reduction in training time.
Using the Nguyen-Widrow initialization algorithm, the values of the weights and biases are assigned at the beginning of the training; the network is then trained, and each hidden neuron retains the freedom to adjust its own values during the training process.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The proposed NN is trained offline using values collected at a sampling frequency of 625 Hz, followed by scaling into the range &#91;-1:1&#93;, with the magnitudes of the fundamental components of the voltages and the rotational velocity as inputs. The targets are the normalized values of the magnitudes of the fundamental components of the three phase currents. The training data set consists of 5487 samples; the number of parameters to be calculated (weights and biases) during the training stage is 579 for the proposed neural network.</font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Tests in the lab showed a dependency between the percentage of the rated load applied using a ramp and the speed settling time; in particular, a larger load yields a longer settling time. The PMSM speed settling time plays an important role in the dynamic behavior of the currents. Multiple current values are observed for a single value of speed when the load is applied in a ramp over a time below the settling time. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Furthermore, a longer time implies a larger number of samples to take during a loading process of the motor. In addition, the memory requirement of the Levenberg-Marquardt algorithm is relatively large because LM uses the Jacobian matrix, which in the case of the implemented network has dimensions <i>Q</i> x <i>n</i>, where <i>Q</i> is the number of training sets and <i>n</i> is the number of weights and biases.
This large matrix restricts the size of the training set, due to limitations in the processing capacity of the test bench. </font></p> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">For the reasons explained above, the training set is chosen with its size in mind. The training data set is comprised of measurements taken by loading the motor between no load and 30% of the rated torque, applied by ramping up for 2 seconds, followed by 2 seconds of constant 30% rated torque, and then a ramp down for 2 seconds to no load, as shown in <a href="#fig03">Figure 3</a>. Currently, the ramp-up and ramp-down torque configuration is implemented in synchronous machines whose startup is executed under torque control. This arrangement has the benefit that the mechanical starting behavior of the equipment driven by the motor is much softer than when a torque step is used for starting and stopping. Additionally, the torque ramp is chosen to protect the test bench: tests showed a mechanical impact on the PMSM produced by the quick change between the no-load and load conditions when a torque step is applied.</font></p> <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig03"></a><img src="/img/revistas/dyna/v76n160/a27fig03.gif"> <br> Figure 3.</b> Load applied to obtain the training set</font></p> ]]></body>
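The Nguyen-Widrow initialization used in Section 2.3 can be sketched as follows; the scale factor 0.7 times H to the power 1/N and the renormalization step follow the commonly cited form of the method, so treat the details as an approximation of the algorithm in [18]:

```python
import math, random

def nguyen_widrow_init(n_inputs, n_hidden, seed=0):
    """Initial hidden-layer weights and biases in the Nguyen-Widrow style.

    Scale factor beta = 0.7 * n_hidden**(1/n_inputs); each neuron's random
    weight vector is rescaled to magnitude beta, and biases are drawn
    uniformly from (-beta, beta). Details follow the commonly cited form.
    """
    rng = random.Random(seed)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    weights, biases = [], []
    for _ in range(n_hidden):
        w = [rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        weights.append([beta * wi / norm for wi in w])
        biases.append(rng.uniform(-beta, beta))
    return weights, biases

# 6 hidden neurons as in the paper; 7 inputs is a placeholder dimension.
W, b = nguyen_widrow_init(n_inputs=7, n_hidden=6)
```

The effect is to spread the active regions of the tansig neurons across the normalized input range, which is what shortens training relative to purely random initialization.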
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The criterion established for checking that the network has been trained for a sufficient number of iterations to ensure convergence is that the value of the Sum Squared Error (<i>SSE</i>) is low and remains relatively constant over at least ten epochs. High variation in this term after each iteration is a clear sign of unstable convergence. Additionally, the training algorithm used provides a measure of how many network parameters (weights and biases), out of the total, are being effectively used by the network. This effective number of parameters should remain approximately the same, no matter how large the total number of parameters in the network becomes. (This assumes that the network has been trained for a sufficient number of iterations to ensure convergence.) In the training performed, the neural network achieved convergence when the value of <i>SSE </i>was approximately 1. The use of regularization avoids the need for a separate validation data set.</font></p> <font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>2.4 Experimental Approach</b></font>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In the proposed identification scheme, the data acquisition system samples <i>V<sup>NS </sup></i>(<i>t</i>), <i>I<sub>f</sub><sup>NS</sup></i>(<i>t</i>) and <i>v(t)</i>. The signals are sampled at 625 Hz, and the voltage and current signals are filtered using bandpass filters to obtain the fundamental components of each phase, i.e. <i>V<sub>f</sub> <sup>NS</sup></i>(<i>t</i>) and <i>I<sub>f</sub><sup>NS</sup></i>(<i>t</i>). </font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The proposed neural network model is experimentally validated using a system which consists of a 28.8 kVA variable frequency drive connected to an 11.25 kW, 640 V, 60 Hz, Y<i>-</i>connected 8-pole PMSM. 
A dc motor is mechanically coupled to the PMSM to serve as a load (see <a href="#fig04">Figure 4</a>). </font></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig04"></a><img src="/img/revistas/dyna/v76n160/a27fig04.gif">    <br> Figure 4</b>. PMSM experimental test bed</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">During the experiments, the load is changed by varying the armature resistance of the dc motor to emulate a load fluctuation condition, e.g. increasing or decreasing the load from 0% up to 45%. </font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The system is developed using MATLAB<sup>&#174;</sup>/Simulink with dSPACE<sup>&#174;</sup> as an interface to the data acquisition hardware and PMSM drive system. The fully developed model is applied to the electrical system, and its performance can be studied in dSPACE<sup>&#174;</sup>, which is used to display and record the line voltages, line currents, predicted values of current and the torque signal.</font></p>     <p>&nbsp;</p>     <p><font size="3" face="Verdana, Arial, Helvetica, sans-serif"><b>3. EXPERIMENTAL RESULTS</b></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">A series of tests is designed to demonstrate the robustness and performance of the proposed system over a wide variety of operating conditions at different load levels.</font></p>     ]]></body>
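<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The error measures reported in this section (MSE and mean error) can be sketched as follows. This is a minimal Python/NumPy illustration; the function name prediction_errors is hypothetical, and the normalization used for the MSE is an assumption, since the exact formula is not given in the text.</font></p>

```python
import numpy as np

def prediction_errors(i_actual, i_pred):
    """MSE and mean error between actual and NN-predicted phase currents.

    Normalizing the MSE by the variance of the actual signal is an assumed
    convention for the 'normalized mean square error'.
    """
    residual = np.asarray(i_actual) - np.asarray(i_pred)  # per-sample residual
    nmse = np.mean(residual ** 2) / np.var(i_actual)      # normalized mean squared error
    mean_err = np.mean(np.abs(residual))                  # mean absolute error
    return nmse, mean_err

# toy check: a perfect prediction gives zero error under both measures
t = np.linspace(0.0, 1.0, 625)       # 1 s at the 625 Hz sampling rate
ia = np.sin(2.0 * np.pi * 60.0 * t)  # 60 Hz fundamental component
nmse, mean_err = prediction_errors(ia, ia)
print(nmse, mean_err)                # 0.0 0.0
```

<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In practice the two arguments would be the measured and NN-predicted fundamental components of one phase current over a test run.</font></p>]]></body>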
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In testing the performance of the developed network, the normalized mean square error and the mean absolute error are used. The testing data set comprises measurements obtained from no load to 10%, 20%, 30%, 40%, and 45% of the rated torque, respectively, which are entirely different from those used in the training data set, in order to evaluate the generalization performance of the network. In addition, a series of tests is performed at 45% of the rated torque using different ramp slopes to introduce the load to the PMSM. Although the ramps differ from those used in the training stage, all of them are configured with ramp times greater than the settling time. The results are summarized in <a href="#tab01">Tables 1</a> and <a href="#tab02">2</a> in terms of <i>MSE</i> (mean squared error) and <i>Mean</i> error. Tables 1 and 2 demonstrate the generalization performance of the network up to 45% of the rated torque.</font></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="tab01"></a>Table 1</b>. Generalization performance of the network from no torque up to 45% of the rated torque</font>    <br>   <img src="/img/revistas/dyna/v76n160/a27tab01.gif"></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="tab02"></a>Table 2</b>. Generalization performance of the network for 45% of the rated torque using different ramp slopes</font>    <br>   <img src="/img/revistas/dyna/v76n160/a27tab02.gif"></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><a href="#fig05">Figure 5</a> shows the efficacy of the implemented model in tracking the variations of the current in phase A when the load coupled to the motor is changing. 
Figure 5 shows the actual value of the current <i>I<sub>a</sub></i>, the simulated value of the current <i>I<sub>asim</sub></i>, the variation in the torque, and the residuals. </font></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig05"></a><img src="/img/revistas/dyna/v76n160/a27fig05.gif">    <br>   Figure 5.</b> Actual and simulated fundamental component of the phase A current under load fluctuation</font>    <br> </p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><a href="#fig06">Figures 6</a>-<a href="#fig11">11</a> show the deviation in the simulated current for a load fluctuation condition in each phase via the residual magnitudes. The residuals, or errors, are produced by comparing the three phase current predictions with the actual values of the three phase currents. The residual for phase A (phases B and C are similar) is expressed in (12):</font></p>     ]]></body>
<body><![CDATA[<p><img src="/img/revistas/dyna/v76n160/a27eq12.gif"></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where <i>I<sub>a</sub></i> is the actual value of the current in phase A at time <i>t</i> and <sub><img src="/img/revistas/dyna/v76n160/a27eq010.gif"></sub> is the predicted value of the current in phase A at time <i>t</i>.</font></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig06"></a><img src="/img/revistas/dyna/v76n160/a27fig06.gif">    <br>   Figure 6.</b> Residuals phase A under load fluctuation (0 to 30% of rated torque)</font></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig07"></a><img src="/img/revistas/dyna/v76n160/a27fig07.gif">    <br> <b>Figure 7.</b> Residuals phase B under load fluctuation (0 to 30% of rated torque)</font></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig08"></a><img src="/img/revistas/dyna/v76n160/a27fig08.gif">    <br>   Figure 8</b>. Residuals phase C under load fluctuation (0 to 30% of rated torque)</font></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig09"></a><img src="/img/revistas/dyna/v76n160/a27fig09.gif">    <br>   Figure 9</b>. Residuals phase A under load fluctuation (0 to 20% of rated torque)</font></p>     ]]></body>
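<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">A minimal Python/NumPy sketch of the per-phase residual of equation (12); the function name residual_phase is hypothetical, and phases B and C are computed the same way.</font></p>

```python
import numpy as np

def residual_phase(i_actual, i_predicted):
    """Per-phase residual of eq. (12): r(t) = I(t) - I_hat(t)."""
    return np.asarray(i_actual) - np.asarray(i_predicted)

ia = np.array([1.00, 0.98, 1.02])      # actual fundamental-component current, phase A
ia_hat = np.array([1.01, 0.97, 1.00])  # NN-predicted current, phase A
print(residual_phase(ia, ia_hat))      # [-0.01  0.01  0.02]
```
]]></body>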
<body><![CDATA[<p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig10"></a><img src="/img/revistas/dyna/v76n160/a27fig10.gif">    <br> Figure 10.</b> Residuals phase B under load fluctuation (0 to 20% of rated torque)</font></p>     <p align="center"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b><a name="fig11"></a><img src="/img/revistas/dyna/v76n160/a27fig11.gif">    <br> Figure 11.</b> Residuals phase C under load fluctuation (0 to 20% of rated torque)</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">As shown in <a href="#fig05">Figures 5</a>-<a href="#fig11">11</a>, the residual magnitudes change depending on the PMSM&#8217;s load condition. As noted in <a href="#fig07">Figures 7</a>-<a href="#fig11">11</a>, the NN model produces the largest residuals while the load is ramping up from no load to the load condition and ramping down towards the no-load condition. This behavior can be attributed to factors such as the time delay introduced by online operation and the overshoot produced during the change of the variables in the model. Additionally, the residual magnitudes can be seen to vary as a result of the maximum load change. In summary, the performance of the developed model is only slightly affected by the variation in the load fluctuation.</font></p>     <p>&nbsp;</p>     <p><font size="3" face="Verdana, Arial, Helvetica, sans-serif"><b>4. CONCLUSIONS</b></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The proposed NN based approach to modeling a PMSM under load fluctuation shows its efficacy in performing current prediction when the PMSM is running under different load conditions. It is noted that, experimentally, the load fluctuation condition does not produce any significant increase in the residuals in any phase studied, from the no-load condition up to 45% of the rated torque. 
</font></p>     <p>&nbsp;</p>     <p><font size="3" face="Verdana, Arial, Helvetica, sans-serif"><b>REFERENCES</b></font></p>     ]]></body>
<body><![CDATA[<!-- ref --><p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>&#91;1&#93;</b> ATIYA, A. F. AND PARLOS, A.G. New results on recurrent network training: Unifying the algorithms and accelerating convergence, IEEE Trans. Neural Networks, vol. 13, 765–786, 2000.     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000123&pid=S0012-7353200900040002700001&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;2&#93;</b>  LI ZHANG AND YUGENG XI. Nonlinear system identification based on an improved support vector regression estimator. In Advances in Neural Networks, International Symposium on Neural Networks, Dalian, China , 586-591, August 2004.       &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000124&pid=S0012-7353200900040002700002&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;3&#93;</b>  IOAN, D.L., System Identification and control design, Prentice Hall, 1990.        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000125&pid=S0012-7353200900040002700003&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;4&#93;</b>  RAHMAN, M.A., HOQUE, M.A., On-line adaptive artificial neural network based vector control of permanent magnet synchronous motors, IEEE Transaction on Energy Conversion, , vol.13, no.4, 311-318, 1998.       
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000126&pid=S0012-7353200900040002700004&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;5&#93;</b>  WENBIN, W., XUEDIAN, Z., JIAQUN, X., RENYUAN, T. A feedforward control system of PMSM based on artificial neural network. Proceedings of the Fifth International Conference on Electrical Machines and Systems, Volume 2, 679 - 682, Aug 2001.       &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000127&pid=S0012-7353200900040002700005&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;6&#93;</b> KUMAR, R., GUPTA, R.A., BANSAL, A.K.. Identification and Control of PMSM Using Artificial Neural Network. IEEE International Symposium on Industrial Electronics, Volume, Issue, 4-7,30 – 35, June 2007.     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000128&pid=S0012-7353200900040002700006&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;7&#93;</b>  FAYEZ F. M. EL-SOUSY. High-Performance Neural-Network Model-Following Speed Controller for Vector-Controlled PMSM Drive System, IEEE International Conference on Industrial Technology, Tunisia, December 2004.       
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000129&pid=S0012-7353200900040002700007&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;8&#93;</b>  SIO, K.C.; LEE, C.K., Identification of a nonlinear motor system with neural networks, International Workshop on Advanced Motion Control, vol.1, 287-292, Mar 1996.       &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000130&pid=S0012-7353200900040002700008&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;9&#93;</b> PARLOS, A.G., RAIS, O., AND ATIYA, A., Multi-Step-Ahead Prediction using Dynamic Recurrent Neural Networks, International Joint Conference on Neural Networks, Vol 1, 349 – 352, July 1999.     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000131&pid=S0012-7353200900040002700009&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;10&#93;</b>  MOHAMED, F.A.; KOIVO, H., Modeling of induction motor using non-linear neural network system identification, SICE Annual Conference , vol.2, 977-982, Aug. 2004.        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000132&pid=S0012-7353200900040002700010&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;11&#93;</b> KIM K. 
AND PARLOS, A., Induction Motor Fault Diagnosis Based on Neuropredictors and Wavelet Signal Processing”, IEEE/ASME Transactions on Mechatronics, Vol. 7, No 2, 201-219, June 2002.     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000133&pid=S0012-7353200900040002700011&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;12&#93;</b> NARENDRA,K.S.; PARTHASARATHY, K., Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol.1, no.1, 4-27, Mar 1990.     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000134&pid=S0012-7353200900040002700012&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;13&#93;</b>  HUSH, D.R. AND HORNE, B.G., Progress in supervised neural networks, Signal Processing Magazine, IEEE , vol.10, no.1, 8-39, Jan 1993.       &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000135&pid=S0012-7353200900040002700013&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;14&#93;</b>  DE JESUS, O.; HORN, J.M.; HAGAN, M.T., Analysis of recurrent network training and suggestions for improvements, International Joint Conference on Neural Networks, vol.4, 2632-2637, 2001.       
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000136&pid=S0012-7353200900040002700014&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;15&#93;</b> FORESEE, F.D., AND M.T. HAGAN, Gauss-Newton approximation to Bayesian regularization, Proceedings of the 1997 International Joint Conference on Neural Networks,1930–1935, 1997.     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000137&pid=S0012-7353200900040002700015&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;16&#93;</b> MACKAY, D.J.C., Bayesian interpolation, Neural Computation, Vol. 4, No. 3, 415–447, 1992.     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000138&pid=S0012-7353200900040002700016&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;17&#93;</b>  HAGAN, M.T., DEMUTH, H.B. AND BEALE, M.H. Neural Network Design, Boston, MA: PWS Publishing, 1996.       &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000139&pid=S0012-7353200900040002700017&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><br>   <b>&#91;18&#93;</b> NGUYEN, D., AND WIDROW, B. Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights, Proceedings of the International Joint Conference on Neural Networks, Vol. 3, pp. 21–26, 1990. 
</font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000140&pid=S0012-7353200900040002700018&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --> ]]></body><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[ATIYA]]></surname>
<given-names><![CDATA[A. F.]]></given-names>
</name>
<name>
<surname><![CDATA[PARLOS]]></surname>
<given-names><![CDATA[A.G.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[New results on recurrent network training: Unifying the algorithms and accelerating convergence]]></article-title>
<source><![CDATA[IEEE Trans. Neural Networks]]></source>
<year>2000</year>
<volume>13</volume>
<page-range>765-786</page-range></nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[ZHANG]]></surname>
<given-names><![CDATA[LI]]></given-names>
</name>
<name>
<surname><![CDATA[YUGENG]]></surname>
<given-names><![CDATA[XI]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Nonlinear system identification based on an improved support vector regression estimator]]></article-title>
<source><![CDATA[Advances in Neural Networks]]></source>
<year>2004</year>
<month>August</month>
<conf-name><![CDATA[ International Symposium on Neural Networks]]></conf-name>
<conf-loc>Dalian </conf-loc>
<page-range>586-591</page-range></nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[IOAN]]></surname>
<given-names><![CDATA[D.L.]]></given-names>
</name>
</person-group>
<source><![CDATA[System Identification and control design]]></source>
<year>1990</year>
<publisher-name><![CDATA[Prentice Hall]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[RAHMAN]]></surname>
<given-names><![CDATA[M.A.]]></given-names>
</name>
<name>
<surname><![CDATA[HOQUE]]></surname>
<given-names><![CDATA[M.A.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[On-line adaptive artificial neural network based vector control of permanent magnet synchronous motors]]></article-title>
<source><![CDATA[IEEE Transaction on Energy Conversion]]></source>
<year>1998</year>
<volume>13</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>311-318</page-range></nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[WENBIN]]></surname>
<given-names><![CDATA[W.]]></given-names>
</name>
<name>
<surname><![CDATA[XUEDIAN]]></surname>
<given-names><![CDATA[Z.]]></given-names>
</name>
<name>
<surname><![CDATA[JIAQUN]]></surname>
<given-names><![CDATA[X.]]></given-names>
</name>
<name>
<surname><![CDATA[RENYUAN]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A feedforward control system of PMSM based on artificial neural network]]></article-title>
<source><![CDATA[]]></source>
<year>2001</year>
<month>Aug</month>
<volume>2</volume>
<conf-name><![CDATA[Fifth International Conference on Electrical Machines and Systems]]></conf-name>
<conf-loc> </conf-loc>
<page-range>679 - 682</page-range></nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[KUMAR]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[GUPTA]]></surname>
<given-names><![CDATA[R.A.]]></given-names>
</name>
<name>
<surname><![CDATA[BANSAL]]></surname>
<given-names><![CDATA[A.K.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Identification and Control of PMSM Using Artificial Neural Network]]></article-title>
<source><![CDATA[]]></source>
<year>2007</year>
<month>June</month>
<conf-name><![CDATA[ IEEE International Symposium on Industrial Electronics]]></conf-name>
<conf-loc> </conf-loc>
<page-range>30 - 35</page-range></nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[EL-SOUSY]]></surname>
<given-names><![CDATA[FAYEZ F. M.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[High-Performance Neural-Network Model-Following Speed Controller for Vector-Controlled PMSM Drive System]]></article-title>
<source><![CDATA[]]></source>
<year>2004</year>
<conf-name><![CDATA[ IEEE International Conference on Industrial Technology]]></conf-name>
<conf-date>December 2004</conf-date>
<conf-loc>Tunisia </conf-loc>
</nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[SIO]]></surname>
<given-names><![CDATA[K.C.]]></given-names>
</name>
<name>
<surname><![CDATA[LEE]]></surname>
<given-names><![CDATA[C.K.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Identification of a nonlinear motor system with neural networks]]></article-title>
<source><![CDATA[International Workshop on Advanced Motion Control]]></source>
<year>1996</year>
<month>Mar</month>
<volume>1</volume>
<page-range>287-292</page-range></nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[PARLOS]]></surname>
<given-names><![CDATA[A.G.]]></given-names>
</name>
<name>
<surname><![CDATA[RAIS]]></surname>
<given-names><![CDATA[O.]]></given-names>
</name>
<name>
<surname><![CDATA[ATIYA]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Multi-Step-Ahead Prediction using Dynamic Recurrent Neural Networks]]></article-title>
<source><![CDATA[]]></source>
<year>1999</year>
<month>July</month>
<volume>1</volume>
<conf-name><![CDATA[ International Joint Conference on Neural Networks]]></conf-name>
<conf-loc> </conf-loc>
<page-range>349 - 352</page-range></nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[MOHAMED]]></surname>
<given-names><![CDATA[F.A.]]></given-names>
</name>
<name>
<surname><![CDATA[KOIVO]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Modeling of induction motor using non-linear neural network system identification]]></article-title>
<source><![CDATA[]]></source>
<year>2004</year>
<month>Aug</month>
<volume>2</volume>
<conf-name><![CDATA[ SICE Annual Conference]]></conf-name>
<conf-loc> </conf-loc>
<page-range>977-982</page-range></nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[KIM]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
<name>
<surname><![CDATA[PARLOS]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Induction Motor Fault Diagnosis Based on Neuropredictors and Wavelet Signal Processing]]></article-title>
<source><![CDATA[IEEE/ASME Transactions on Mechatronics]]></source>
<year>2002</year>
<month>June</month>
<volume>7</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>201-219</page-range></nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[NARENDRA]]></surname>
<given-names><![CDATA[K.S.]]></given-names>
</name>
<name>
<surname><![CDATA[PARTHASARATHY]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Identification and control of dynamical systems using neural networks]]></article-title>
<source><![CDATA[IEEE Transactions on Neural Networks]]></source>
<year>1990</year>
<month>Mar</month>
<volume>1</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>4-27</page-range></nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[HUSH]]></surname>
<given-names><![CDATA[D.R.]]></given-names>
</name>
<name>
<surname><![CDATA[HORNE]]></surname>
<given-names><![CDATA[B.G.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Progress in supervised neural networks]]></article-title>
<source><![CDATA[Signal Processing Magazine]]></source>
<year>1993</year>
<month>Jan</month>
<volume>10</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>8-39</page-range><publisher-name><![CDATA[IEEE]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[DE JESUS]]></surname>
<given-names><![CDATA[O.]]></given-names>
</name>
<name>
<surname><![CDATA[HORN]]></surname>
<given-names><![CDATA[J.M.]]></given-names>
</name>
<name>
<surname><![CDATA[HAGAN]]></surname>
<given-names><![CDATA[M.T.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Analysis of recurrent network training and suggestions for improvements]]></article-title>
<source><![CDATA[]]></source>
<year>2001</year>
<volume>4</volume>
<conf-name><![CDATA[ International Joint Conference on Neural Networks]]></conf-name>
<conf-loc> </conf-loc>
<page-range>2632-2637</page-range></nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[FORESEE]]></surname>
<given-names><![CDATA[F.D.]]></given-names>
</name>
<name>
<surname><![CDATA[HAGAN]]></surname>
<given-names><![CDATA[M.T.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Gauss-Newton approximation to Bayesian regularization]]></article-title>
<source><![CDATA[]]></source>
<year>1997</year>
<conf-name><![CDATA[ International Joint Conference on Neural Networks]]></conf-name>
<conf-loc> </conf-loc>
<page-range>1930-1935</page-range></nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[MACKAY]]></surname>
<given-names><![CDATA[D.J.C.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Bayesian interpolation]]></article-title>
<source><![CDATA[Neural Computation]]></source>
<year>1992</year>
<volume>4</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>415-447</page-range></nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[HAGAN]]></surname>
<given-names><![CDATA[M.T.]]></given-names>
</name>
<name>
<surname><![CDATA[DEMUTH]]></surname>
<given-names><![CDATA[H.B.]]></given-names>
</name>
<name>
<surname><![CDATA[BEALE]]></surname>
<given-names><![CDATA[M.H.]]></given-names>
</name>
</person-group>
<source><![CDATA[Neural Network Design]]></source>
<year>1996</year>
<publisher-loc><![CDATA[Boston, MA]]></publisher-loc>
<publisher-name><![CDATA[PWS Publishing]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[NGUYEN]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[WIDROW]]></surname>
<given-names><![CDATA[B.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights]]></article-title>
<source><![CDATA[]]></source>
<year>1990</year>
<volume>3</volume>
<conf-name><![CDATA[ Proceedings of the International Joint Conference on Neural Networks]]></conf-name>
<conf-loc> </conf-loc>
<page-range>21-26</page-range></nlm-citation>
</ref>
</ref-list>
</back>
</article>
