Print version ISSN 0012-7353
Dyna rev.fac.nac.minas v.77 n.163 Medellín jul./set. 2010
ANALYSIS AND REVIEW OF THE CONTRIBUTION OF NEURAL NETWORKS TO SAVING ELECTRICITY IN RESIDENTIAL LIGHTING BY A DESIGN IN MATLAB
ANÁLISIS Y ESTUDIO DE LA CONTRIBUCIÓN DE LAS REDES NEURONALES AL AHORRO DE ENERGÍA ELÉCTRICA EN ILUMINACIÓN RESIDENCIAL MEDIANTE UN DISEÑO EN MATLAB
Technician in Electricity, Universidad Distrital Francisco José de Caldas, Electrical Engineering Student, firstname.lastname@example.org
Technician in Electricity, Universidad Distrital Francisco José de Caldas, Electrical Engineering Student, email@example.com
Electronic Engineer, Universidad Distrital Francisco José de Caldas, Professor, firstname.lastname@example.org
Received for review February 10th, 2010; accepted June 16th, 2010; final version June 18th, 2010
ABSTRACT: This document presents the implementation of schedule programming as a lighting control method, achieving both a total saving and a personalized saving using neural networks. From a series of data gathered on the operation of five lights located in different parts of a specific house, a neural network was designed for one light and that design was implemented for the remaining ones. These neural networks were trained with the input vectors (hour of the day, day of the week, and holiday Mondays) and their respective target vectors (total saving and personalized saving), with the purpose of evaluating the performance of neural networks in optimizing methods for saving electric energy in residential lighting.
KEYWORDS: Neural networks, schedule programming, total saving, personalized saving, lighting control.
RESUMEN: En este documento se presenta la implementación de la programación horaria como método de control de iluminación, para realizar un ahorro total y un ahorro personalizado, utilizando redes neuronales. Con la adquisición de una serie de datos, sobre el funcionamiento de 5 luminarias ubicadas en diferentes partes de una casa específica, se diseñó una red neuronal para una luminaria y se implementó este diseño para las restantes. Estas redes neuronales fueron entrenadas con los vectores de entrada; hora al día, día a la semana, lunes festivos y sus respectivos vectores objetivo "ahorro total y ahorro personalizado" Con el fin de evaluar el desempeño de las redes neuronales en la optimización de métodos para el ahorro de energía eléctrica en iluminación residencial.
PALABRAS CLAVE: Red neuronal, programación horaria, ahorro total, ahorro personalizado, control de iluminación.
1. INTRODUCTION
Nowadays it is very common to see the cost of the electric energy service increase, which makes people concerned about the energy consumption in their homes, caused by the multiple electric devices present there. A large part of this consumption is due to basic household necessities such as illumination and food refrigeration.
Taking into account these saving needs, it is also necessary to consider the extra expenditure generated by an inadequate use of electric energy; the most frequent cause is leaving devices on when they are not being used, the most common example being lamps or light bulbs left turned on.
Therefore, the need arises to optimize and develop new systems and/or methods that allow saving electric energy in homes through illumination control. Today there are various methods to control illumination, among which is schedule programming, where the turning off, turning on, and regulation of the illumination can be programmed according to the time of day and the day of the week.
This document evaluates the performance of neural networks in optimizing the operation of 5 lights in a specific house based on a schedule programming method. This method is implemented because neural networks are intelligent models that seek to reproduce the behavior of the brain, with the capacity to adapt to any application.
2. DESIGN AND METHODOLOGY
To design a neural network capable of predicting the operation of the lights, several factors must be taken into consideration:
- Target vectors and input vectors
- Data acquisition
- Structure of the neural network
2.1 Target Vectors and Input Vectors
The selection of the target vectors and the input vectors is the first step in the design of a neural network applied to data prediction.
2.1.1 Target Vectors
The objective of this work is to evaluate the performance of neural networks in optimizing the schedule programming method for energy saving, through a total saving and a personalized saving, which gives rise to the need to define two target vectors for each light. Table 1 describes the numeric values assigned according to the characteristics of each variable:
Total saving: composed of 2 factors, the light being on or off, which facilitates the assignment of binary values: "0" represents off and "1" represents on.
Personalized saving: composed of the 5 regulation levels desired in the illumination of the house (0%, 25%, 50%, 75%, and 100%), and therefore represented by decimal values from 0 to 1 in steps of 0.25 (Equation 1 was used to assign the decimal values to the regulation percentages).
Where Y is the numeric value of the desired regulation and n can take values from 0 to 100 in intervals of 25.
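A minimal sketch (not the authors' MATLAB code) of the encoding implied by Equation 1, assumed here to be Y = n/100: each regulation percentage maps to a decimal target value in steps of 0.25.

```python
def regulation_to_target(n):
    """Map a regulation percentage (0, 25, 50, 75, 100) to its decimal value."""
    if n % 25 != 0 or not 0 <= n <= 100:
        raise ValueError("regulation must be 0, 25, 50, 75 or 100")
    return n / 100.0

targets = [regulation_to_target(n) for n in (0, 25, 50, 75, 100)]
# targets == [0.0, 0.25, 0.5, 0.75, 1.0]
```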
2.1.2 Input Vectors
It is indispensable to establish input factors that generate patterns related to the output values during the training of the neural network, so that it can determine the electric energy consumption of the lights in the house.
On the other hand, non-predetermined environmental conditions must be taken into consideration, such as changes to fixed situations or modifications of the established schedules; such changes can be defined as unpredictable factors and require a filtering process to prevent them from significantly disturbing the model.
Factors that affect the outputs of the network
The schedule programming method uses the hour of the day and the day of the week to determine the operating behavior of the lights. Therefore the factors hour, day, and holiday Monday were selected as the input data for the simulation of the neural network. The selected variables are explained below.
a. Hour of the day: This is one of the variables with the most influence on the use of lights in homes, since it is evident that people need to turn lights on or off depending on the hour of the day. To introduce this factor, and to give more precision to the operating times of the lights, a numeric variable is used as an input to the neural network; its range runs from 0 to 23.75 in steps of 0.25, each step representing an interval of 15 minutes (Equation 2).
Where Y is the numeric value of the hour of the day and n can take values from 0 to 60 in intervals of 15 minutes, equivalent to ¼ of an hour.
Table 2 shows an example of the numeric values used to represent the 15-minute intervals.
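An illustrative encoding of the hour-of-day input, assuming Equation 2 has the form Y = h + n/60 with n a multiple of 15: values run from 0.00 to 23.75 in steps of 0.25, one per 15-minute interval.

```python
def hour_to_input(hour, minute):
    """Encode a time of day as hour + minute/60, on 15-minute boundaries."""
    if not (0 <= hour <= 23) or minute not in (0, 15, 30, 45):
        raise ValueError("expected hour 0-23 and a 15-minute boundary")
    return hour + minute / 60.0

# e.g. 18:45 is encoded as 18.75
```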
b. Day of the week: The day of the week affects the operation of the lights; a clear example is comparing a Sunday with a Tuesday, since Sunday, being a non-working day, changes people's routine behavior and hence their use of appliances and lights. This factor is codified as shown in Table 3.
c. Holiday Mondays: There is an evident difference between a working Monday and a holiday Monday which, as mentioned before, affects the usage habits of the lights. This variable is included in the input vector as a binary numeric value, where "1" represents a holiday Monday and "0" a working Monday.
Finally, the schematic of the input and target vectors is designed, as shown in Figure 1.
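The three input factors and two targets above can be sketched as one training pattern. This is a hypothetical assembly, and the day codes of Table 3 are assumed here to run from 1 (Monday) to 7 (Sunday):

```python
def make_pattern(hour_code, day_code, holiday_monday, on_off, regulation):
    """Build one [hour, day, holiday] input and its [total, personalized] targets."""
    inputs = [hour_code, day_code, 1 if holiday_monday else 0]
    targets = [on_off, regulation]
    return inputs, targets

# a light on at 75% regulation at 18:45 on a working Sunday
x, t = make_pattern(18.75, 7, False, 1, 0.75)
# x == [18.75, 7, 0], t == [1, 0.75]
```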
2.2 Data acquisition
Once the input and target vectors were determined, data gathering began on the operation of the 5 lights located in different parts of a specific house over 11 weeks. The procedure to capture this information was based on a study of the schedules established by the routines of the occupants, who also collaborated in this work. Table 4 shows an example of how the data was gathered. In total, 295,680 data points were gathered, of which about 20% were set aside for validation and the remaining 80% were used to train the network, as shown in Table 5.
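The 80/20 split can be sketched as follows; shuffling before splitting is an assumption, since the text does not say how the validation 20% was chosen:

```python
import random

def split_dataset(samples, val_fraction=0.2, seed=0):
    """Split samples into training and validation subsets after shuffling."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    n_val = int(len(samples) * val_fraction)
    val = [samples[i] for i in idx[:n_val]]
    train = [samples[i] for i in idx[n_val:]]
    return train, val

train, val = split_dataset(list(range(100)))
# len(train) == 80, len(val) == 20
```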
2.3 Structure of the neural network
At this point the design of the neural network begins, using Matlab's Neural Network toolbox (nntool). This simulation tool allows selecting the network type, training function, number of layers, number of neurons, and transfer function per layer, taking into account the input and target vectors; Figure 1 presents the general schematic of the neural network.
2.3.1 Type of neural network
For the selection of the type of network, the following parameters were considered: type of training, desired objectives, number of hidden layers, and the processing capacity available to train the network. This last aspect can cause time optimization problems in the iterations and can freeze the equipment used, which had the following characteristics: Core 2 Duo CPU T5550 at 1.8 GHz with 2 GB of RAM and a 2.046 GB paging file.
After making a comparative chart of the 3 types of neural networks used for prediction, as shown in Table 6, the Feed-Forward Backprop network was chosen. This type of network is one of the most used for pattern prediction today because, when it emits a result, it compares it with the desired output and calculates the error made. The output layer then propagates the error back towards the hidden layers, recalculating the weights so that the error is minimized in the next iteration. As the network is trained, the neurons of the intermediate layers learn to recognize the characteristics of the inputs (input patterns).
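The error-correction idea described above can be illustrated, in heavily simplified form, with a single linear weight: compare the output with the target, then move the weight against the error gradient so the next iteration's error shrinks. This is only a one-parameter sketch of the principle, not the toolbox's algorithm.

```python
def train_step(w, x, target, lr=0.1):
    out = w * x              # forward pass (single linear neuron)
    err = out - target       # error against the desired output
    return w - lr * err * x  # gradient step: d(err**2 / 2)/dw = err * x

w = 0.0
for _ in range(200):
    w = train_step(w, 2.0, 1.0)
# w converges toward 0.5 so that w * 2.0 approximates the target 1.0
```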
2.3.2 Feed-Forward Backprop network parameters
With the Feed-Forward Backprop network selected, the Neural Network toolbox provides a series of parameters (algorithms) that vary according to the application intended for the neural network. The parameters are:
- Input ranges
- Training function
- Adaptation learning function
- Performance function
- Number of layers
These are shown in the Neural Network toolbox as in Figure 2. The effect of each of the previous items on the design of the network is explained below.
a. Input ranges
This item is in fact not a parameter but a box that allows loading the input vector, from which the numeric ranges are obtained.
b. Training function
This parameter allows choosing the type of training algorithm used by the neural network. According to the research done on prediction and application to practical problems, the use of 4 training functions is recommended:
- Trainlm: (the default) requires large storage capacity and converges in a high number of iterations; this algorithm updates the weights and biases according to Levenberg-Marquardt optimization.
- Trainbfg: requires more storage capacity than the traditional algorithm but generally converges in fewer iterations; it is a quasi-Newton alternative to the conjugate gradient techniques, with a mathematical expression derived from Newton's method.
- Trainscg: requires less storage capacity and converges in fewer iterations; scaled conjugate gradient backpropagation training.
- Traingdx: requires less storage capacity but converges in a high number of iterations; gradient descent backpropagation training with momentum and an adaptive learning rate.
For the design of the neural network, a series of tests was carried out to find the best training function; the best results can be found in Table 8.
c. Adaptation learning function
The Neural Network toolbox allows varying this parameter between 2 options:
- Learngd: gradient descent learning.
- Learngdm: gradient descent learning with momentum.
Because it cannot be established beforehand which of the two gives the better adaptation, tests were made to verify the optimal one, as shown in Table 8.
d. Performance function
As its name indicates, this function allows observing the performance of the neural network during training by providing its error.
Matlab gives the option of 3 performance functions:
- MSE: mean squared error performance function.
- MSEREG: mean squared error performance function with regularization.
- SSE: sum squared error performance function.
Since all 3 functions measure the training error, the default parameter established by Matlab's Neural Network toolbox, MSE, was selected.
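The MSE criterion kept here is simply the mean of the squared differences between network outputs and target values:

```python
def mse(outputs, targets):
    """Mean squared error between network outputs and target values."""
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

# e.g. mse([1.0, 0.0], [0.0, 0.0]) == 0.5
```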
e. Number of layers and properties of the layers
For a better design of the neural network, Matlab allows modifying:
- Number of layers
For the selection of the number of layers, error tests were made with 1, 2, 3, and 4 layers; the best results are shown in Table 8. From the gathered data it is concluded that the number of layers with the best performance is 3.
- Number of neurons
As with the number of layers, this characteristic was obtained through tests; the main results are shown in Table 8, where it can be observed that the number of neurons per layer that best suits the training of the network is [20-30-2]. It is worth mentioning that the objective of the network is to predict two outputs (total saving and personalized saving), which is why the number of output neurons is 2.
- Transfer function per layer
Matlab offers 3 transfer functions per layer, described in Table 7. To select the transfer function, a series of tests had to be performed to determine the best model. The output of the last layer was designed under the criterion of a PURELIN transfer function, since this linear transfer function provides the desired outputs for the intermediate values between 0 and 1 of the personalized saving (Figure 3). For the remaining layers the best result was:
First Layer: Tansig
Second Layer: Tansig
Third Layer: Purelin
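For reference, the two transfer functions used in the final design can be written out directly: tansig is the hyperbolic-tangent sigmoid, bounded in (-1, 1), and purelin is the identity, which lets the output layer emit the intermediate regulation values between 0 and 1.

```python
import math

def tansig(x):
    """Hyperbolic-tangent sigmoid; Matlab's tansig is equivalent to tanh."""
    return math.tanh(x)

def purelin(x):
    """Linear (identity) transfer function."""
    return x
```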
3.1 Neural network structure
In line with the objective of the project, a single neural network simulating the 5 lights with their respective output vectors (total saving and personalized saving) was attempted, but the amount of memory required made this simulation impossible. For that reason it was decided to build one neural network per light, for a total of 5 networks. The parameter-selection tests were made with only one light, given that the structure of the data is similar.
With the help of Matlab's toolbox, a total of 281 simulations were made, with the goal of obtaining the configuration with the lowest performance error (highest adaptation).
Table 9 shows the results obtained in the 281 simulations made for the light located in room number 3, including the 3 best results of the four training algorithms under the parameter variations of the Feed-Forward Backprop network; these finally yield the configuration that best adapts to the data.
Figure 4 shows the final model obtained. The structure is composed of 3 layers: the first contains 20 neurons with a Tansig transfer function, the second is composed of 30 neurons with the same transfer function as the first, and the output layer uses a Purelin transfer function; the third layer was designed according to the target vectors.
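A minimal sketch of a forward pass through this [20-30-2] structure with tansig/tansig/purelin transfer functions follows. The weights below are random placeholders, not the trained values from the paper.

```python
import math
import random

rng = random.Random(0)

def layer(inputs, n_out, activation):
    # one fully connected layer: weighted sum plus bias, then activation
    return [activation(sum(rng.uniform(-1, 1) * x for x in inputs)
                       + rng.uniform(-1, 1))
            for _ in range(n_out)]

def forward(x):                       # x = [hour, day, holiday]
    h1 = layer(x, 20, math.tanh)      # first hidden layer (tansig)
    h2 = layer(h1, 30, math.tanh)     # second hidden layer (tansig)
    return layer(h2, 2, lambda v: v)  # purelin output: [total, personalized]

out = forward([18.75, 1, 0])          # two raw network outputs
```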
3.2 Simulation results
Table 9 lists the performance given by Matlab (the mean squared error of the function) for each light, together with the averages of the absolute errors for each target value (performance goal: 0).
Figure 5 shows the training of the neural network for each of the lights; the graphs start with a high error that diminishes until it stabilizes. Alongside the training, the validation made by Matlab can be seen, which allows minimizing the number of iterations; the performance value of each network is found in Table 9.
Figure 5(a): Training for the lighting located in the kitchen at 101 iterations.
Figure 5(b): Training for the lighting located in the dining room at 117 iterations.
Figure 5(c): Training for the lighting located in the study room at 29 iterations.
Figure 5(d): Training for the lighting located in room 1 at 31 iterations.
Figure 5(e): Training for the lighting located in room 3 at 134 iterations.
3.3 Analysis of the data obtained
Figure 6 shows the percentage participation of the On and Off states in the total data gathered for each light over the 11-week period. It must be noted that the On state comprises the 25%, 50%, 75%, and 100% regulations, and the Off state the 0% regulation.
Figure 6(a): Contribution of the states in the total of data gathered for the lighting in the kitchen.
Figure 6(b): Contribution of the states in the total of data gathered for the lighting in the dining room.
Figure 6(c): Contribution of the states in the total of the data gathered in the lighting of the study room.
Figure 6(d): Contribution of the states in the total of the data gathered in the lighting of room 1.
Figure 6(e): Contribution of the states in the total of the data gathered in the lighting of room 3.
3.4 Result analysis for total saving
From Figure 6 and the outputs of the simulations for total saving, it was observed that a great deal of data lies at intermediate points between "0" and "1". Considering that the 2 states that make up the total saving are On (1) and Off (0), a logical condition is implemented that takes values close to zero down to "0" and, likewise, values close to one up to "1". To find the optimal comparison limit, a series of tests was developed, calculating the energy cost and the user satisfaction for 5 different conditions or comparison limits, as shown in Table 11.
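The logical condition can be sketched as a simple threshold: raw outputs near 0 are forced to Off (0) and outputs near 1 to On (1). The 0.5 comparison limit used here is only an assumed example; Table 11 tests 5 different limits.

```python
def to_on_off(output, limit=0.5):
    """Round a raw network output to the binary On (1) / Off (0) state."""
    return 1 if output >= limit else 0

# to_on_off(0.83) == 1, to_on_off(0.12) == 0
```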
To link the error directly with dissatisfaction, a factor is determined that relates the user's preference to the performance of the network. This factor was fixed through an interview with the members of the house where the data was gathered, as shown in Table 10. That table gives the preference per user as an average percentage per state; according to it, users prefer by 75% that the network turn the lights off rather than on.
Where EabsOn and EabsOff are the average errors of the difference between the target vector and the output of the network.
Table 12 shows the final cost per light generated by using the neural networks for the total saving output, together with the real cost of each light calculated from the data gathered. The total cost of energy, both real and as provided by the neural network, is found in Table 13.
Finally, for the analysis of the total saving data, Table 14 shows the customer dissatisfaction, obtained by multiplying the absolute error by the average of the client preferences with respect to the On and Off states of the lights, found in Table 10.
3.5 Result analysis for personalized saving
For the personalized saving, as for the total saving, a limit of 0.2 ("20%") was defined; but unlike the total saving, this limit was determined to filter the noise produced by the network, which generated a considerable consumption. An upper limit of 100% was also established, since the light cannot take a higher value; data that exceeded it was therefore clamped to that value.
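A sketch of this filtering rule for the personalized saving output: values below the 0.2 noise limit are set to 0, and values above 1.0 (100% regulation) are clamped to 1.0.

```python
def filter_regulation(output, noise_limit=0.2):
    """Filter network noise below the limit and clamp values above 100%."""
    if output < noise_limit:
        return 0.0
    return min(output, 1.0)

# filter_regulation(0.1) == 0.0, filter_regulation(1.3) == 1.0
```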
Table 15 shows the final cost per light for the personalized saving.
The total cost of energy of the 5 lights under regulation, both real and given by the neural network, is found in Table 16.
Table 17 is composed of 5 columns showing the customer dissatisfaction, determined by multiplying the absolute error by the customer's average preference percentage for each state, taken from Table 10. Finally, the customer satisfaction data for the personalized saving is presented.
3.6 Comparison of saving methods
Table 18 compares the "total saving" and "personalized saving" criteria of schedule programming as strategies for saving electric energy in illumination.
4. CONCLUSIONS
Since it was not possible to perform the simulation for all the lights at once, a neural network was designed for one light through a series of trial-and-error tests and then implemented for the remaining four lights, obtaining an average performance of 0.0303577.
A schedule programming method using neural networks was implemented, obtaining a satisfaction of 87.2% for total saving and 88.5% for personalized saving, which in the established time period generated energy costs of $57.886,3 for total saving and $53.142,2 for personalized saving.
Comparing the total saving and personalized saving criteria with respect to the network outputs, the total saving gave a customer satisfaction of 87.2% against 88.5% for the personalized saving, i.e. a satisfaction 1.3% higher for the personalized saving.
Likewise, comparing the costs of the energy consumed with the real values gives a saving of $13.503,9 for total saving and $8.009,3 for personalized saving, a higher economic benefit of $4.744,1 for the personalized saving.
The outputs of the developed neural network present unnecessary costs at moments when the light should be turned off: the total saving presents an expenditure of $17.138,3678 in the Off state, and the personalized saving one of $19.779,24 when the regulation should be zero percent.
Even though the output data of the neural network represents a saving, it is important to consider that the neural network is not providing the consumption required to supply the basic needs of residential lighting.
REFERENCES
Casa domo. "El portal del hogar digital". [Online]. [ref. August 4, 2008]. Available: http://www.casadomo.com/noticiasDetalle.aspx?c=145&idm=157&m=21&n2=20&pat=20
B. MARTÍN DEL BRÍO, A. SANZ MOLINA. Redes Neuronales y Sistemas Difusos. Alfaomega, Mexico, 2002, pp. XXI, 3, 69.
P. L. GALINDO. "Redes multicapa: Algoritmo BackPropagation". [Online]. 1999. [ref. December 5, 2008]. Available: http://www2.uca.es/dept/leng_sist_informaticos/preal/23041/transpas/EBackpropagation/ppframe.htm, slide 6.
Chapter 4: "Redes Neuronales en la predicción". [Online]. [ref. February 3, 2009]. Available: http://catarina.udlap.mx/u_dl_a/tales/documentos/msp/aguilar_d_ra/capitulo4.pdf
"Mecanismos de Adaptación de parámetros". [Online]. [ref. March 18, 2009]. Available: http://omarsanchez.net/adaptparam.Aspx
M. C. RÍOS, N. C. HERNÁNDEZ, M. M. CHANCHAN. "Evaluación de los diferentes algoritmos de entrenamiento de redes neuronales artificiales para el problema de clasificación vehicular". [Online]. Mexico. [ref. June 23, 2009]. Available: http://yalma.fime.uanl.mx/~pisis/Verano/2006/talk-norma.pdf
Neural Network Toolbox 6. [Online]. [ref. March 6, 2008]. Available: http://www.mathworks.com/access/helpdesk/help/pdf_doc/nnet/nnet.pdf
I. RICHARDSON, M. THOMSON, D. INFIELD, A. DELAHUNTY. "Domestic lighting: A high-resolution energy demand model". [Online]. England. 2008. [ref. March 23, 2009]. Available: http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V2V-4VSB18H-5&_user=10&_coverDate=07%2F31%2F2009&_alid=1373133729&_rdoc=12&_fmt=high&_orig=search&_cdi=5712&_sort=r&_docanchor=&view=c&_ct=4973&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=bcba6dbfbfd6ef242fd2a8348e7c53ef
H. W. LI, K. L. CHEUNG, S. L. WONG, N. T. LAM. "An analysis of energy-efficient light fittings and lighting controls". [Online]. China, Research Group, City University of Hong Kong, Tat Chee Avenue. 2009. [ref. August 23, 2009]. Available: http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V1T-4WW16YF-1&_user=10&_coverDate=02%2F28%2F2010&_alid=1373151754&_rdoc=24&_fmt=high&_orig=search&_cdi=5683&_sort=r&_docanchor=&view=c&_ct=4973&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=e84d9bb9acdf97cbd15714b13cc27301