Exact Sciences and Health Sciences
Permanent URI for this collection: https://hdl.handle.net/11285/551014
This collection contains doctoral theses and degree projects from the School of Engineering and Sciences and the School of Medicine and Health Sciences.
Search Results
- Extracting the embedded knowledge in class visualizations from artificial neural networks for applications in dataset and model compression and combinatorial optimization (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-04-25) Abreu Pederzini, José Ricardo; Terashima Marín, Hugo; González Mendoza, Miguel; Juárez Jiménez, Julio Antonio; Rosales Pérez, Alejandro; Bendre, Nihar; Ortiz Bayliss, José Carlos; School of Engineering and Sciences; Campus Monterrey

  Artificial neural networks are efficient learning algorithms, considered universal approximators, for solving numerous real-world problems in areas such as computer vision, language processing, and reinforcement learning. To approximate any given function, neural networks train a large number of parameters, which can run into the millions or even billions in some cases. The large number of parameters and hidden layers makes neural networks hard to interpret, which is why they are often referred to as black boxes. In the quest to make artificial neural networks interpretable in the field of computer vision, feature visualization stands out as one of the most developed and promising research directions. While feature visualizations are a useful tool for gaining insight into the underlying function learned by a neural network, they are still regarded simply as visual aids that require human interpretation. In this doctoral work, we propose that feature visualizations, class visualizations in particular, are analogous to mental imagery in humans and contain the knowledge that the model extracted from the training data. Therefore, when correctly generated, class visualizations can be considered a conceptual compression of the data used to train the underlying model, resembling the experience of perceiving the actual training samples just as mental imagery resembles the real experience of perceiving the actual physical event.
  We present results showing that class visualizations can be considered a conceptual compression of the training data used to train the underlying model, and we present a methodology that enables the use of class visualizations as training data. To achieve this goal, we show that class visualizations can be used as training data to develop new models from scratch, achieving, in some cases, the same accuracy as the underlying model. Additionally, we explore the nature of class visualizations through different experiments to gain insight into what exactly class visualizations represent and what knowledge is embedded in them. To do so, we compare class visualizations to the class-average image from the training data and demonstrate how the other classes a model is trained on affect the shape of, and the knowledge embedded in, a class visualization. We show that class visualizations are equivalent to visualizing the weight matrices of the output neurons in shallow network architectures, and we demonstrate that class visualizations can be used as pretrained convolutional filters. We experimentally show the potential of class visualizations for extreme model compression. Finally, we present a novel methodology that enables the use of artificial neural networks along with class visualizations for the solution of combinatorial optimization problems, such as the 2D Bin Packing Problem (BPP): an artificial neural network is trained to score potential solutions to a 2D BPP, and that network is then used to generate an 'optimal' (locally optimal) solution by extracting a class visualization from the network via backpropagation to its input. Even though we demonstrate class visualizations as a tool to solve the bin packing problem, it is important to note that class visualizations have the potential to be used in the same way to solve other types of combinatorial optimization problems.
  For other types of combinatorial optimization problems, one only needs to design a neural network capable of scoring solutions to the particular problem and then extract class visualizations from that network to generate candidate solutions.
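  The extraction step described above, backpropagating to the network's input to maximize a class score, can be illustrated with a minimal sketch. This is not the author's implementation: it uses a hypothetical one-layer linear scorer in pure Python, which also makes the thesis's observation concrete that, for shallow architectures, the class visualization converges toward the output neuron's own weight vector.

  ```python
  # Minimal illustrative sketch (assumed, not from the thesis): gradient
  # ascent on the input x to maximize a linear class score w . x.
  # For this scorer, d(score)/dx = w, so every step moves the input in
  # the direction of the class weight vector.

  def class_visualization(weights, n_features, steps=200, lr=0.1):
      """Gradient ascent on the input to maximize one class's score."""
      x = [0.0] * n_features              # start from a blank input
      for _ in range(steps):
          x = [xi + lr * wi for xi, wi in zip(x, weights)]
      return x

  # Hypothetical weight vector of one output neuron in a shallow model
  w_class = [0.5, -0.2, 0.8, 0.0]
  viz = class_visualization(w_class, n_features=4)
  # viz is proportional to w_class, i.e. the visualization reproduces
  # the output neuron's weights, as noted for shallow networks above.
  ```

  In a deep convolutional network the gradient with respect to the input is no longer constant, so the resulting visualization encodes knowledge from all layers rather than a single weight vector; the ascent loop itself is unchanged.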
- A methodology for prediction interval adjustment for short-term load forecasting (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2020-12) Zúñiga García, Miguel Ángel; Batres Prieto, Rafael; Santamaría Bonfil, Guillermo (co-advisor); Noguel Monroy, Juana Julieta; Ceballos Cancino, Héctor Gibrán; Arroyo Figueroa, Gustavo; School of Engineering and Sciences; Campus Estado de México

  Electricity load forecasting is an essential tool for effective power grid operation and for energy markets. However, a lack of accuracy in the estimation of electricity demand may cause an excessive or insufficient supply, which can produce instabilities in the power grid or cause load cuts. Hence, probabilistic load forecasting methods have become more relevant, since they capture not only load point forecasts but also the uncertainty associated with them. In this thesis, a framework for building prediction models that produce prediction intervals is proposed. The framework creates a probabilistic short-term load forecasting (STLF) model by completing a series of tasks. First, prediction models are generated using a prediction method and a segmented time-series dataset. Next, these models are used to produce point forecasts, and the errors are recorded for each subset. In parallel, an association-rules analysis is performed on the same segmented time-series dataset to model cyclic patterns. Then, with the recorded errors and the information obtained from the association-rules analysis, the prediction intervals are created. Finally, the performance of the prediction intervals is measured using specific error metrics. The methodology is tested on two datasets: Mexico and the Electric Reliability Council of Texas (ERCOT).
  The best results for the Mexico dataset are a Prediction Interval Coverage Probability (PICP) of 96.49% and a Prediction Interval Normalized Average Width (PINAW) of 12.86; for the ERCOT dataset, a PICP of 94.93% and a PINAW of 3.6. These results were measured after reductions of 14.75% and 5.25% in the prediction intervals' normalized average width for the Mexico and ERCOT datasets, respectively. Narrowing the prediction interval is important because it can reduce the amount of electricity purchased, and even a 1% reduction in electricity purchases represents a large amount of money. The main contributions of this work are: a framework that can convert any point forecast model into a probabilistic model, the Max Lift rule method for selecting high-quality rules, and the probabilistic Mean Absolute Error and Root Mean Squared Error metrics.
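  The two interval-quality metrics reported above can be sketched under their standard definitions (the thesis's exact normalization is not shown here, so treat this as an assumption): PICP is the fraction of observations falling inside their interval, and PINAW is the average interval width normalized by the range of the observed load.

  ```python
  # Sketch of the standard PICP / PINAW definitions (assumed; the
  # thesis may normalize differently). Inputs are parallel lists of
  # observed loads and interval bounds.

  def picp(y, lower, upper):
      """Fraction of observations covered by their prediction interval."""
      hits = sum(1 for yi, lo, hi in zip(y, lower, upper) if lo <= yi <= hi)
      return hits / len(y)

  def pinaw(y, lower, upper):
      """Average interval width, normalized by the observed load range."""
      y_range = max(y) - min(y)
      avg_width = sum(hi - lo for lo, hi in zip(lower, upper)) / len(y)
      return avg_width / y_range

  # Made-up illustrative loads: the last interval misses y = 15.0
  y     = [10.0, 12.0, 11.0, 15.0]
  lower = [ 9.0, 11.5, 10.0, 13.0]
  upper = [11.0, 13.0, 12.0, 14.0]
  print(picp(y, lower, upper))    # 0.75  (3 of 4 covered)
  print(pinaw(y, lower, upper))   # 0.325 (avg width 1.625 over range 5.0)
  ```

  The trade-off the abstract describes is visible here: widening the intervals raises PICP toward 100% but inflates PINAW, so the methodology's interval reductions are only meaningful while coverage stays high.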
- Implementation and comparison of prediction models in a Periodic Disturbance Micromixer (PDM) (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2021-12-18) Ocampo Silva, Ixchel; Camacho León, Sergio; Hernández Hernández, José Ascención; Sustaita Narváez, Alan Osiris; Nerguizian, Vahé; School of Engineering and Sciences; Campus Monterrey

  In recent years, the use of micromixers to produce liposomes has increased in research. They are an economical alternative that reduces reactant waste and allows control of liposome size. However, micromixer technology is still not viable for industry. Some reasons are a low production rate, the absence of a protocol for determining the operating parameters that yield a target liposome size, and the insufficient accuracy of existing liposome-size prediction models. This dissertation focused on implementing and comparing different prediction models used for the Periodic Disturbance Micromixer (PDM). Three models focus on predicting liposome size from two operating parameters. All the models were implemented in MATLAB and compared through the correlation coefficient (R). They were experimentally validated and subsequently compared with data analysis (DA) models. This work concluded that artificial intelligence (AI) techniques for predicting operating parameters and liposome size show a significant improvement in correlation coefficients compared to those obtained by DA-based models.
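  The comparison criterion named above, the correlation coefficient R between predicted and measured liposome sizes, can be sketched as follows. This is the standard Pearson definition, not code from the dissertation (which used MATLAB), and the sample values are hypothetical.

  ```python
  # Pearson correlation coefficient R between predictions and
  # measurements (standard definition; illustrative only).

  def pearson_r(x, y):
      n = len(x)
      mx = sum(x) / n
      my = sum(y) / n
      cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
      sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
      sy = sum((yi - my) ** 2 for yi in y) ** 0.5
      return cov / (sx * sy)

  # Hypothetical predicted vs. measured liposome diameters (nm);
  # R close to 1 indicates a well-calibrated prediction model.
  predicted = [92.0, 110.0, 128.0, 151.0]
  measured  = [95.0, 108.0, 131.0, 149.0]
  r = pearson_r(predicted, measured)
  ```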
- An integral approach for the synthesis of optimum operating procedures of thermal power plants towards better operational flexibility (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2020-06-11) Rosado Tamariz, Erik; Batres Prieto, Rafael; Ganem Corvera, Ricardo; Genco, Filippo; School of Engineering and Sciences; Campus Ciudad de México

  To meet the challenge of balancing the large-scale introduction of variable renewable energies against intermittent energy demand scenarios in current electrical systems, operational flexibility plays a key role. The operational flexibility of an electrical system can be addressed from different areas, such as power generation, transmission and distribution systems, energy storage (both electrical and thermal), demand management, and sector coupling. Regarding power generation, specifically at the power plant level, operational flexibility can be managed through the cyclic operation of conventional power plants, which involves load fluctuations, modifications in ramp rates, and frequent startups and shutdowns. Since conventional power plants were not designed to operate under cyclic schemes that demand fast response times, these capabilities must be developed through the design of operating procedures that minimize the time needed to take the power plant from an initial state to the goal state without compromising the structural integrity of critical plant components. This thesis proposes a dynamic optimization methodology for the synthesis of optimum operating procedures of thermal power plants, which determines the optimal control-valve sequences that minimize operating times, based on dynamic simulation, metaheuristic optimization, and surrogate modeling.
  Based on such an approach, power plants can increase their operational flexibility to address the large-scale introduction of variable renewable energies and intermittent energy demand scenarios. This thesis proposes a dynamic optimization framework based on a metaheuristic optimization algorithm coupled with a dynamic simulation model, using the modeling and simulation environment OpenModelica, together with a surrogate model that estimates, in a computationally efficient way, the structural-integrity constraint of the dynamic optimization problem. Two case studies are used to evaluate the proposed framework by comparing their results against information published in the literature. The first case study focuses on managing a thermal power plant's flexible operation through the synthesis of the startup operating procedure of a drum boiler. The second case study addresses the synthesis of an optimum operating strategy for a combined heat and power system to improve the electric power system's operational flexibility.
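  The optimization loop described above can be sketched at a very high level. Everything below is hypothetical: a toy "dynamic model" stands in for the OpenModelica simulation, a toy surrogate stands in for the structural-integrity (thermal-stress) constraint, and plain random search stands in for the thesis's metaheuristic; only the overall structure (search over valve sequences, surrogate-filtered feasibility, minimize startup time) mirrors the described framework.

  ```python
  # Highly simplified sketch of the described framework (all models and
  # constants below are made up for illustration).
  import random

  N_STEPS = 6           # discretized valve positions over the startup
  STRESS_LIMIT = 1.0    # surrogate-estimated structural-integrity bound

  def startup_time(seq):
      # Toy stand-in for the dynamic simulation: wider valve openings
      # shorten the startup.
      return sum(1.0 / (0.1 + v) for v in seq)

  def surrogate_stress(seq):
      # Toy surrogate: thermal stress grows with the ramp rate between
      # consecutive valve positions.
      ramps = [abs(b - a) for a, b in zip(seq, seq[1:])]
      return 2.0 * max(ramps, default=0.0)

  def search(iters=2000, seed=0):
      # Random search as a placeholder for the metaheuristic: keep the
      # fastest startup whose surrogate stress stays within the bound.
      rng = random.Random(seed)
      best, best_t = None, float("inf")
      for _ in range(iters):
          seq = sorted(rng.uniform(0.0, 1.0) for _ in range(N_STEPS))
          if surrogate_stress(seq) > STRESS_LIMIT:
              continue              # infeasible: would overstress components
          t = startup_time(seq)
          if t < best_t:
              best, best_t = seq, t
      return best, best_t

  best_seq, best_time = search()
  ```

  The surrogate is what makes the loop tractable: evaluating the stress constraint with the full dynamic simulation at every candidate would dominate the runtime, so a cheap approximation filters candidates before (or instead of) the expensive model.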