Doctorado
Permanent URI for this community: https://hdl.handle.net/11285/551013
Collection of theses presented by students to obtain a doctoral degree from the Tecnológico de Monterrey.
- 3cv+2: Modelo de Calidad para la Construcción de Vivienda en México (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-01-05) Solís Flores, Juan Pablo; García Rodríguez, Salvador
- 3D Computer vision for online activity detection. Case study: metabolic rate estimation for connected thermostat (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022-12-01) Mata Juárez, Omar; Ponce Cruz, Pedro; Peffer, Therese; McDaniel, Troy; López Caudana, Edgar Omar; School of Engineering and Sciences; Campus Monterrey; Molina Gutiérrez, Arturo
The ability to detect human activities in computer vision has gained importance over the years due to its potential in many applications, such as crime prevention, healthcare, public safety, human-computer/robot interaction, smart homes, videogames, and monitoring. A way to enable those applications is through a Human Activity Recognition (HAR) process, in which an activity is identified from the series of physical actions that compose it. The identification requires sensors to obtain the data for processing and classification, and these kinds of sensors are often found inside a smart home. Therefore, it is proposed to use noninvasive sensors in combination with digital signal processing to develop a platform for detecting human activity. Moreover, a case study is proposed for validating the platform through a strategy to save energy in HVAC systems without affecting the thermal comfort of the occupant.
- 3D printed organ-on-chip device and neural tissue engineering of spheroid and organoid cultures (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022-06-14) Choy Buentello, David; Alvarez Hernández, Mario Moisés; Broersen, Kerensa; González Meljem, José Mario; Caraza Camacho, Ricardo; Escuela de Ingeniería y Ciencias; Campus Monterrey; Trujillo de Santiago, Grissel
With the increased relevance of 3D culture techniques, such as spheroids and organoids, additional equipment and techniques are needed to accommodate growing tissue and measure physiological functions. Organ-on-chip devices have increasingly been employed thanks to their perfusion and gradient capabilities, aiding in controlling the microenvironment surrounding the growing culture. Here, we describe an easy-to-use 3D printed neuron-on-chip device able to create sustainable diffusion gradients in a large hydrogel chamber. The device consists of a 3 × 3 × 11 mm³ culture chamber flanked by two parallel media channels. The nutrient/waste exchange created by diffusion from the two media channels creates a microenvironment suitable for different cell types, including cancer cells, embryonic cells, and fibroblasts. The drug delivery capabilities of this device seem to enhance pharmacokinetics, with an increased effect of perfused substances. Experiments with perfused paclitaxel showed a decrease in mobility and viability of the spheroids only achieved in controls with a 50% higher concentration of the drug. Taking advantage of the perfusion capabilities of the two media channels, a dual gradient can be formed within the hydrogel, creating a complex microenvironment within the culture chamber. This versatile proof-of-concept device holds great promise by enabling a wide range of experiments with 3D cultures. Additionally, we describe a cost-effective method of creating embryoid bodies (EBs) from human embryonic stem cells (ESCs). Our method combines covering concave-bottom well plates with a commercially available anti-adherence solution and force-aggregating the cells through centrifugation. This method can be performed minutes before EB formation, instead of the days of plate preparation required by other ad hoc methods. More importantly, the use of this method, with either U-bottom or V-bottom well plates, provides reproducible EBs with low variability in diameters and differentiation when compared to the commercially available plates. Lastly, we further differentiate our EBs into hippocampal organoids to develop a physiologically relevant model for neurodegenerative diseases, such as Alzheimer's. Our organoids express markers for all the hippocampal regions, including HuB for CA1-CA3 and PROX1 for the dentate gyrus. Voltage-sensitive dyes allowed for a minimally invasive method of studying the electrophysiological activity of the neurons, which revealed mature synchronization by day in vitro (DIV) 60. Exposure to amyloid-β (Aβ) showed a direct correlation between concentration and neural damage, demonstrating the potential for future disease modeling. Overall, our results suggest a fully formed hippocampal organoid able to assist in uncovering hippocampal developmental data and in disease modeling.
- 3D printed tumor on chips for the culture and maturation of heterotypic cancer spheroids as a platform for drug testing (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022-12-05) Gallegos Martínez, Salvador; Shrike Zhang, Yu; González Meljem, José Mario; Luna Aguirre, Claudia Maribel; Olvera Posadas, Daniel; School of Engineering and Sciences; Campus Monterrey; Trujillo de Santiago, Grissel
The recapitulation of the cancer environment in tumor-on-chip systems will greatly contribute to accelerating cancer studies on the fronts of fundamental research, pharmacological testing of new therapeutic compounds, and personalized medicine. Here we describe the development of two microfluidic platforms that contribute to the advance of tumor-on-chip research. First, we describe a simple and robust method for the fabrication, maturation, and extended culture of large heterotypic cancer (MCF7 and MCF7/fibroblast) spheroids (~900 µm in diameter) in a 3D-printed mini continuous stirred tank reactor (mini-CSTR). In brief, MCF7 and MCF7/BJ cell suspensions (5×10⁴ cells) were incubated in batch culture to form discoid cell aggregates (600 µm in diameter). These microtissues were then transferred into the mini-CSTR and continuously fed with culture media for an extended time (~30 days). The spheroids progressively increased in size during the first 20 days of perfusion culture to reach a steady diameter. We characterized the spheroid morphology and architecture and the evolution of expression of relevant tumor-related genes (i.e., ER, VEGF, Ki67, Bcl2, LDHA, and HIF-1α) in spheroids cultured for 30 days. This mini-CSTR culture strategy enables the simple and reproducible fabrication of relatively large spheroids and offers great potential for studying the effects of diverse effectors on tumor progression. In addition, we introduce a 3D-printed microfluidic system that can be generically used to culture tumor tissues under well-controlled environments. The system is composed of three compartments. The left and right compartments have two inlets and two outlets, which provide means to continuously feed liquid streams to the system. The central compartment is designed to host a hydrogel where a microtissue can be confined and cultured. A transparent lid can be adapted to enable visual inspection under a microscope. We conducted fluorescent and FITC-dextran tracer experiments to characterize the hydraulic performance of the system. In addition, we cultured MCF7 and MCF7/BJ spheroids embedded in GelMA hydrogel constructs (placed in the central chamber) to illustrate the use of this system to sustain long-term microtissue culture experiments. We also present experimental results that illustrate the flexibility and robustness of this 3D-printed device for tumor-on-chip experiments, including pharmacological testing of anticancer compounds. These "open-source" organ-on-chip systems are intended to be a general-purpose resource to facilitate and democratize the development of tumor-on-chip applications. We also explored the use of these cell aggregates and some of the characterization techniques to develop educational activities in the context of tissue engineering. Students fabricated a DIY (do-it-yourself) incubator and cultured spheroids for 7 days on average. They evaluated glucose consumption, size progression, and change in color of the culture media. In this proposed activity, students were exposed to concepts and basic experimental duties commonly used in a tissue engineering lab.
- A causal multiagent system approach for automating processes in intelligent organizations (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2010-12-01) Ceballos Cancino, Héctor Gibrán
The current competitive environment motivated Knowledge Management (KM) theorists to propose the notion of Organizational Intelligence (OI) for enabling a rapid response of large organizations to changing conditions. KM practitioners consider that OI resi
- A classifier-based fusion algorithm for latent fingerprint identification based on a neural network (Instituto Tecnológico y de Estudios Superiores de Monterrey) Valdes Ramirez, Danilo; Medina Pérez, Miguel Ángel; Gutiérrez Rodríguez, Andrés E.; Morales Moreno, Aythami; Loyola González, Octavio; García Borroto, Milton; Escuela de Ingeniería y Ciencias; Campus Estado de México; Monroy, Raúl
Human beings present a singular skin on the surface of their fingers, with furrows, ridges, and sweat pores. Ridges and furrows describe distinct forms, such as points of maximum curvature, bifurcations and interruptions, and particular ridge contours. Experts have classified those forms as level 1, 2, or 3 features. When the finger surface touches an object, it prints the features in a 2D image termed a fingerprint, due to the grease and sweat released by the pores. Hitherto, there is no report of two fingerprints having the same features, not even in identical twins. Fingerprints of unknown identity acquired from object surfaces are latent fingerprints. Latent fingerprints have practical applications in criminal investigations and justice administration. Such sensitive applications demand high accuracy and speed in the identification of a latent fingerprint. Although some authors have proposed latent fingerprint identification algorithms, the achieved identification rates are still considered insufficient for the sensitivity of latent fingerprint applications. Some enhancements to latent fingerprint identification have been reported with fusion algorithms. However, the typical fusion scheme has focused on weighted sums with empirically determined weights. The trending use of weighted sums has left room for improvement using classifier-based fusion. Additionally, the literature on latent fingerprint identification lacks an exhaustive analysis of the suitability of the multiple fingerprint feature representations proposed, and a quantification of that suitability. In this research, we analyze the appropriateness of several fingerprint feature representations for representing latent fingerprints, finding a preference for minutia descriptors. Hence, we develop a protocol for evaluating minutia descriptors in closed-set identification. With such a protocol, we determine the merit of nine minutia descriptors suitable for identifying latent fingerprints. As a result, we select four minutia descriptors as candidates for a fusion algorithm and tune their parameters for latent fingerprint identification. Next, we evaluate the four minutia descriptors with their global matching algorithms on subsets of latent fingerprints with good, bad, and ugly qualities. We find two of them reaching the highest identification rates for all subsets and ranks. Therefore, we propose a latent fingerprint identification algorithm that fuses these two algorithms using a neural network with four attributes as input, which characterize the fingerprints' similarity. Experiments show that our proposal improves on the baseline algorithms in 13 of 15 datasets created with the databases NIST SD27, MOLF-IIITD, and GCBD. Our fusion algorithm reports the highest rank-1 identification rate (71.32%) when matching the latent fingerprints in NIST SD27 against 100,000 fingerprints, using only minutiae. Our algorithm takes six milliseconds to compare a fingerprint pair, which is a practical comparison time.
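The fusion step is simple enough to illustrate. The following is a minimal sketch, not the dissertation's implementation, of classifier-based score fusion: a small neural network takes four similarity attributes computed for a latent/reference pair and outputs a fused match score. The training data, attribute semantics, and network size are invented for illustration.

```python
# Hedged sketch of classifier-based score fusion with a small neural network.
# The four input attributes stand in for the similarity attributes of two
# baseline matchers; they are assumptions, not the dissertation's features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy training set: one row per fingerprint pair, four similarity attributes.
X = rng.random((1000, 4))
y = (X @ np.array([0.4, 0.3, 0.2, 0.1]) > 0.5).astype(int)  # toy labels

fusion_net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
fusion_net.fit(X, y)

# Fused score for a new pair: the predicted probability of a genuine match.
pair_attributes = np.array([[0.7, 0.6, 0.2, 0.9]])
print(fusion_net.predict_proba(pair_attributes)[0, 1])
```

A learned fusion of this kind replaces the empirically tuned weights of a weighted sum with parameters fitted to labeled genuine/impostor pairs.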
- A comprehensive analysis of behavioural economics applied to social media using automated methods and asymmetric modelling (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022) Mendoza Urdiales, Román Alejandro; García Medina, Andrés; González Mendoza, Miguel; Escuela de Graduados en Administración y Dirección de Empresas; Sede EGADE Santa Fe; Núñez Mora, José Antonio
Financial economic research has extensively documented the fact that the impact of the arrival of negative news on stock prices is more intense than that of the arrival of positive news. The authors of the present study followed an innovative approach based on two artificial intelligence algorithms to test that asymmetric response effect. The first algorithm was used to web-scrape the social network Twitter to download the top tweets of the 24 largest market-capitalized publicly traded companies in the world during the last decade. A second algorithm was then used to analyze the contents of the tweets, converting that information into social sentiment indexes and building a time series for each considered company. After comparing the social sentiment indexes' movements with the daily closing stock price of individual companies using transfer entropy, our estimations confirmed that the intensities of the impacts of negative and positive news on daily stock prices are statistically different, and that the intensity with which negative news affects stock prices is greater than that of positive news. The results support the asymmetric effect, whereby negative sentiment has a greater effect than positive sentiment, and these results were confirmed with the EGARCH model.
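Transfer entropy is the directional measure doing the causal work in that comparison. Below is a minimal sketch, assuming a simple histogram estimator with lag 1 and equal-frequency binning rather than the study's exact estimator; the sentiment/returns series and their coupling are synthetic.

```python
# Hedged sketch: transfer entropy TE(X -> Y) with a lag-1 histogram estimator.
import numpy as np

def transfer_entropy(x, y, bins=3):
    """TE(X->Y) = sum p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
    edges = np.linspace(0, 1, bins + 1)[1:-1]
    xd = np.digitize(x, np.quantile(x, edges))  # equal-frequency symbols
    yd = np.digitize(y, np.quantile(y, edges))
    y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p_abc = np.mean((y1 == a) & (y0 == b) & (x0 == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((y0 == b) & (x0 == c))
                p_ab = np.mean((y1 == a) & (y0 == b))
                p_b = np.mean(y0 == b)
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(1)
sentiment = rng.normal(size=500)
returns = 0.5 * np.roll(sentiment, 1) + rng.normal(size=500)  # toy coupling
print(transfer_entropy(sentiment, returns))   # noticeably larger than
print(transfer_entropy(returns, sentiment))   # the reverse direction
```

Comparing the two directions, separately for positive and negative sentiment, is what allows the asymmetry claim to be tested.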
- A data-driven modeling approach for energy storage systems (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-11) Silva Vera, Edgar Daniel; Valdez Resendíz, Jesús Elías; Rosas Caro, Julio César; Escobar Valderrama, Gerardo; Guillén Aparicio, Daniel; Soriano Rangel, Carlos Abraham; School of Engineering and Sciences; Campus Monterrey
This dissertation presents a versatile data-driven modeling methodology designed for various energy systems, including battery-based power systems, DC-DC power electronic converters, lithium-ion batteries, and Proton-Exchange Membrane Fuel Cells (PEMFC). The proposed approach captures the nonlinear dynamics of each system by leveraging fundamental measurements and operational data, thus eliminating the need for explicit theoretical models and significantly simplifying the modeling process. Specifically, the methodology allows for the identification of essential parameters by constructing state-space representations that describe both fast and slow system dynamics, which are crucial for accurately modeling transient behaviors and implementing adaptive control strategies. The models were validated across different applications, showing their ability to replicate real system behaviors with high precision. For instance, in the case of DC-DC converters, the models demonstrated an average error deviation of approximately 2% for current signals and 4% for voltage signals, confirming their capacity to track the actual converter dynamics. Similarly, the lithium-ion battery models enabled accurate estimation of the state of charge (SoC) and open-circuit voltage using a modified recursive least-squares algorithm, achieving close alignment with real discharge curves. In the PEMFC stack modeling, the methodology utilized operational data from a real physical model to refine model accuracy, yielding improved predictive capabilities over traditional approaches. These results underscore the efficacy and robustness of the data-driven approach in enhancing the design, control, and optimization of diverse energy systems. By providing a framework that can be readily adapted to different components and configurations, this methodology supports advancements in sustainable energy technologies, enabling the interconnection of multiple energy storage and conversion systems with minimal computational cost and measurement requirements.
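To make the parameter-identification step concrete, here is a minimal sketch of plain recursive least squares on an assumed first-order battery relation v = OCV - R*i. This is deliberately simpler than the modified RLS reported above, and the voltages, currents, and noise levels are invented.

```python
# Hedged sketch: recursive least squares (RLS) with forgetting, estimating
# open-circuit voltage and internal resistance from v = OCV - R*i.
import numpy as np

theta = np.zeros(2)     # parameter vector [OCV, R]
P = np.eye(2) * 1e3     # covariance (large = uninformative prior)
lam = 0.99              # forgetting factor

rng = np.random.default_rng(2)
for _ in range(500):
    i = rng.uniform(0.0, 2.0)                  # measured current (A), toy
    v = 3.7 - 0.05 * i + rng.normal(0, 1e-3)   # measured voltage (V), toy truth
    phi = np.array([1.0, -i])                  # regressor so that v = phi @ theta
    k = P @ phi / (lam + phi @ P @ phi)        # gain
    theta += k * (v - phi @ theta)             # parameter update
    P = (P - np.outer(k, phi @ P)) / lam       # covariance update

print(theta)  # converges toward [3.7, 0.05]
```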
- A Deep Learning-based Algorithm for the Routing Problem in Vehicular Delay-Tolerant Networks (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2020-06-11) Hernández Jiménez, Roberto; Cárdenas Pérez, César Raúl; González Mendoza, Miguel; Sossa Azuela, Juan Humberto; Bustamante Bello, Martín Rogelio; Escuela de Ingeniería y Ciencias; Campus Estado de México; Muñoz Rodríguez, David
The exponential growth of cities across the world has brought along important challenges, such as waste management, pollution, overpopulation, and transportation administration. To mitigate these problems, the idea of the Smart City was born, seeking to provide robust solutions integrating sensors and electronics, information technologies, and communication networks. More particularly, to face transportation challenges, Intelligent Transportation Systems are a vital component in this quest: systems that aim at providing the best solution to transportation-related matters with the aid of information technologies, electronics, and communication networks. In this context, the communication networks are called Vehicular Networks, and they offer a communication framework for moving vehicles, road infrastructure, and pedestrians. The extreme conditions of vehicular environments, nonetheless, make communication between high-speed moving nodes very difficult, so non-deterministic approaches are necessary to maximize the chances of packet delivery. In this work, this problem is addressed using Artificial Intelligence from a hybrid perspective, focusing on both the best next message to replicate and the best next hop in its path through the network. Furthermore, DLR+ is proposed: a router with a priority-based message scheduler and a routing algorithm based on Deep Learning. Simulations done to assess the router's performance show important gains in terms of network overhead and hop count, while maintaining an acceptable packet delivery ratio and delivery delay with respect to other popular routing protocols in vehicular networks.
- A descriptive model for advanced manufacturing technologies use and implementation in Mexican organizations (Instituto Tecnológico y de Estudios Superiores de Monterrey, 1997) Mendizábal Pérez, Carlos Rafael
- A digital twin model with knowledge graph-driven dense captioning (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-06) Wajid, Mohammad Saif; Terashima Marín, Hugo; Ortiz Bayliss, José Carlos; Carrasco Jiménez, José Carlos; Ceballos Cancino, Héctor Gibrán; School of Engineering and Sciences; Campus Monterrey; Najafirad, Peyman
This dissertation is submitted to the Graduate Programs in the School of Engineering and Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science. This document explores how a digital twin model for the TEC District is developed and how a knowledge graph can be used for dense captioning of security events happening in the TEC District digital twin model. It also describes and analyses the factors responsible for security breaches in the city using the concept of neutrosophy. The thesis proposes novel techniques for advancing dense captioning through the integration of knowledge graphs and neutrosophy, harnessing the capabilities of digital twin technology. Digital twins, as virtual replicas of physical entities or systems, offer a comprehensive framework for understanding and simulating real-world scenarios. They have emerged as a powerful tool in various industries, including manufacturing, healthcare, and urban planning. These models rely on detailed simulations of cities, including video data, to analyze and describe various security events using dense captioning. However, the accuracy and relevance of these simulations depend heavily on the quality of the captions generated for the video content. Captioning videos based on temporal information is a challenging task: limiting distracting information over time and space is crucial but difficult. Additionally, ensuring robustness to false positives during captioning and addressing storage issues are significant challenges identified in the literature. Gathering information from knowledge graphs and providing context is another key task because of the presence of indeterminacy in the data; this poses a challenge for defining aggression subjectively and automatically describing events, while optimizing classifiers for faster caption generation and selecting optimal parameters. A widely used technique for dense video captioning is the knowledge graph, which provides a structured representation of knowledge, organizing and connecting information extracted from videos. By incorporating knowledge graphs into the digital twin model, the relevance and context of the captions are significantly enhanced. However, knowledge graphs may fail to capture indeterminate factors that can dramatically impact situation analysis. Indeterminate factors, such as unpredictable human behavior or environmental conditions, are crucial in determining event sequences in digital twin models. In this dissertation, we aim to create a digital twin model of the TEC District for effective dense captioning of events with a knowledge graph model of the district area. In the proposed model, knowledge graphs play a crucial role in enhancing the context and relevance of captions by organizing and connecting information extracted from videos. They provide a structured representation of knowledge, enabling a more comprehensive understanding of video content. We have also utilized neutrosophy to address indeterminate and uncertain events, thereby enhancing the efficiency of dense captioning.
This work is carried out in three phases. The first phase identifies various character traits, taken from datasets and the literature, that lead to different events among crowds, using Neutrosophic Cognitive Maps (NCMs). This is done to identify the significance of various determinate and indeterminate factors when analyzing security events. This task was earlier performed using Fuzzy Cognitive Maps (FCMs) in research domains other than dense video captioning, where indeterminate or uncertain factors were not considered. Therefore, we provide a brief comparison between NCMs and FCMs and show how effective NCMs are when considering the uncertainty of concepts while carrying out tests for describing events. In the second phase, a knowledge graph model for dense captioning is developed. As the captioning is based on a knowledge graph, the time consumed in generating the video captions was considerably reduced. We also used a Bidirectional Long Short-Term Memory (BiLSTM) classifier to analyze the flow of the information provided by the captions, and the efficiency is further enhanced by using a Recurrent Neural Network (RNN). Enabling the Squacc optimization algorithm in both the RNN and the BiLSTM effectively optimized the classifiers' parameters and helped to obtain an efficient output. The performance metrics BLEU, ROUGE, CIDEr, METEOR, and SPICE demonstrated the superiority of the research. In the third phase, we developed a digital twin for the TEC District, Monterrey, Nuevo León, Mexico. We carried out this work by defining and developing five layers in our digital twin model: the ground layer, the BIM layer, the mobility infrastructure, the district 3D model, and, finally, the digital twin. We used common software applications for the development of the TEC District digital twin, such as Esri ArcGIS for data management (map data, GeoJSON, and 2D data), CityEngine for assigning rule files for buildings, vegetation, water, and the road network and for manipulating 2D and 3D data, and QGIS for shape files. The 3D modeling software Blender and Nvidia Omniverse were used for the final digital twin. Using the potential of these tools and techniques, a digital twin is proposed for the buildings, road network, and vegetation of the TEC District (Tecnológico de Monterrey District) region. Here, we integrated our dense captioning model with the TEC District digital twin to obtain captions of security events using knowledge graphs. The general idea of this investigation is to provide a better understanding of digital twins and dense video captioning. By leveraging the capabilities of these technologies, organizations can generate more accurate and insightful analyses of digital twin models, enabling a wide range of applications in various fields. These technologies will also aid surveillance and security in urban planning, offering significant benefits for organizations looking to optimize their operations and enhance their decision-making processes. All the models described in this investigation can be applied to a wider range of instances to achieve acceptable results with respect to time and quality.
- A framework to create bottom-up energy models that support policy-design and decision-making of electricity end-use efficiency: A case study in residential buildings and the residential sector (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022-11-28) Sánchez Escobar, Marlene Ofelia; Noguez Monroy, Juana Julieta; Ruiz Loza, Sergio; Valverde Rebaza, Jorge Carlos; Matus Castillejos, Abel; School of Engineering and Sciences; Campus Ciudad de México; Molina Espinosa, José Martín
In this work we present a framework that guides the creation of bottom-up energy models (BUEMs) that aim to support the policy design of electricity end-use efficiency. Energy models are decision-making tools for policy-makers and key instruments for evaluating decisions. However, research reveals that bottom-up energy models are created empirically and are not designed to guarantee support for policy design. Likewise, the use of scenarios has not been applied as a standard technique within BUEMs. Is it possible, then, to align these models with specific policy goals and to standardize the use of scenarios in them through the application of a process? The framework proposed in this work includes phases, processes, and artifacts that guide the modeler through the model-construction process. The processes incentivize best practices for policy design for the residential sector and residential buildings. To build the proposed framework, we first research the characteristics of BUEMs and their areas of opportunity. Then, we propose processes that are important for the model's design, incorporating best practices to overcome the problems encountered in the models. After that, we execute the framework and record all the events for later analysis. Finally, we assess the process execution quantitatively and qualitatively using process mining. The results of the framework's application are promising, outperforming other methodologies in the literature. In fact, the model-creation time was shown to decrease with the application of the framework. Likewise, the framework promotes (1) policy alignment from the start to the end of the model-development process and (2) the definition of scope boundaries. Moreover, it brings transparency to the process through the use of the proposed templates. The utilization of process mining to create a process model has brought advantages as well: (1) it is possible to monitor, control, and enhance the model-construction process, and (2) process compliance can be evaluated and adaptations recommended with quantitative evidence.
- A generalist reinforcement learning agent for compressing multiple convolutional neural networks (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-12-11) González Sahagún, Gabriel; Conant Pablos, Santiago Enrique; Ortiz Bayliss, José Carlos; Cruz Duarte, Jorge Mario; Gutiérrez Rodríguez, Andrés Eduardo; School of Engineering and Sciences; Campus Monterrey
Deep Learning has achieved state-of-the-art accuracy in multiple fields. A common practice in computer vision is to reuse a pre-trained model for a completely different dataset of the same type of task, a process known as transfer learning, which reduces training time by reusing the filters of the convolutional layers. However, while transfer learning can reduce training time, the model might overestimate the number of parameters needed for the new dataset. As models now achieve near-human performance or better, there is a growing need to reduce their size to facilitate deployment on devices with limited computational resources. Various compression techniques have been proposed to address this issue, but their effectiveness varies depending on hyperparameters. To navigate these options, researchers have worked on automating model compression; some have proposed using reinforcement learning to teach a deep learning model how to compress another deep learning model. This study compares multiple approaches for automating the compression of convolutional neural networks and proposes a method for training a reinforcement learning agent that works across multiple datasets without the need for transfer learning. The agents were tested using leave-one-out cross-validation, learning to compress a set of LeNet-5 models and testing on another LeNet-5 model with different parameters. The metrics used to evaluate these solutions were accuracy loss and the number of parameters of the compressed model. The agents suggested compression schemes that were on or near the Pareto front for these metrics. Furthermore, the models were compressed by more than 80% with minimal accuracy loss in most cases. The significance of these results is that, by scaling this methodology to larger models and datasets, an AI assistant for model compression similar to ChatGPT could be developed, potentially revolutionizing model compression practices and enabling advanced deployments in resource-constrained environments.
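Because compressed models are judged on two competing metrics, candidate schemes are naturally screened by Pareto dominance. A minimal, illustrative sketch (not the thesis code) of that screen over (accuracy loss, parameter count), with invented example numbers:

```python
# Hedged sketch: keep compression schemes not dominated in both metrics
# (lower accuracy loss AND fewer parameters).
from typing import List, Tuple

def pareto_front(results: List[Tuple[float, int]]) -> List[Tuple[float, int]]:
    front = []
    for a in results:
        dominated = any(b[0] <= a[0] and b[1] <= a[1] and b != a for b in results)
        if not dominated:
            front.append(a)
    return front

# Hypothetical schemes: (accuracy loss, remaining parameters).
schemes = [(0.01, 12000), (0.02, 9000), (0.015, 15000), (0.05, 5000)]
print(pareto_front(schemes))  # (0.015, 15000) is dominated by (0.01, 12000)
```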
- A methodology for prediction interval adjustment for short term load forecasting (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2020-12) Zúñiga García, Miguel Ángel; Batres Prieto, Rafael; Santamaría Bonfil, Guillermo (Co-advisor); Noguez Monroy, Juana Julieta; Ceballos Cancino, Héctor Gibrán; School of Engineering and Sciences; Campus Estado de México; Arroyo Figueroa, Gustavo
Electricity load forecasting is an essential tool for effective power grid operation and for energy markets. However, a lack of accuracy in the estimation of the electricity demand may cause an excessive or insufficient supply, which can produce instabilities in the power grid or cause load cuts. Hence, probabilistic load forecasting methods have become more relevant, since these allow understanding not only load point forecasts but also the uncertainty associated with them. In this thesis, a framework to generate prediction models that produce prediction intervals is proposed. This framework is designed to create a probabilistic short-term load forecasting (STLF) model by completing a series of tasks. First, prediction models are generated using a prediction method and a segmented time series dataset. Next, the prediction models are used to produce point forecast estimations, and errors are registered for each subset. At the same time, an association rules analysis is performed on the same segmented time series dataset to model cyclic patterns. Then, with the registered errors and the information obtained by the association rules analysis, the prediction intervals are created. Finally, the performance of the prediction intervals is measured using specific error metrics. This methodology is tested on two datasets: Mexico and the Electric Reliability Council of Texas (ERCOT). The best results for the Mexico dataset are a Prediction Interval Coverage Probability (PICP) of 96.49% and a Prediction Interval Normalized Average Width (PINAW) of 12.86, and for the ERCOT dataset a PICP of 94.93% and a PINAW of 3.6. These results were measured after reductions of 14.75% and 5.25% in the prediction intervals' normalized average width for the Mexico and ERCOT datasets, respectively. Reduction of the prediction interval is important because it can help reduce the amount of electricity purchased, and reducing the electricity purchase by even 1% represents a large amount of money. The main contributions of this work are: a framework that can convert any point forecast model into a probabilistic model, the Max Lift rule method for the selection of high-quality rules, and the probabilistic Mean Absolute Error and Root Mean Squared Error metrics.
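The two interval metrics quoted above have compact standard definitions, implemented in the sketch below; it is assumed, though not stated in the abstract, that the thesis uses these standard forms, and the interval data are toy values.

```python
# Hedged sketch: standard prediction-interval quality metrics.
import numpy as np

def picp(y, lower, upper):
    """Fraction of observations falling inside their interval."""
    return np.mean((y >= lower) & (y <= upper))

def pinaw(y, lower, upper):
    """Mean interval width, normalized by the target's range."""
    return np.mean(upper - lower) / (np.max(y) - np.min(y))

y = np.array([10.0, 12.0, 11.0, 14.0])      # actual loads (toy)
lo = np.array([9.0, 11.5, 10.0, 12.0])      # interval lower bounds
hi = np.array([11.0, 13.0, 12.5, 15.0])     # interval upper bounds
print(picp(y, lo, hi), pinaw(y, lo, hi))    # 1.0 0.5625
```

Narrowing the intervals lowers PINAW but risks lowering PICP, which is exactly the trade-off the reported width reductions of 14.75% and 5.25% navigate.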
- A minutiae-based indexing algorithm for latent palmprints (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-12-11) Khodadoust, Javad; Monroy Borja, Raúl; Aparecida Paulino, Alessandra; Valdes Ramírez, Danilo; Rodríguez Ruiz, Jorge; School of Engineering and Sciences; Campus Monterrey; Medina Pérez, Miguel Ángel
Today, many countries rely on biometric traits for individual authentication, necessitating at least one high-quality sample from each person. However, countries with large populations, like China and India, as well as those with high visitor and tourist volumes, like France, face challenges such as data storage and database identification. Latent palmprints, comprising about one-third of the prints recovered from crime scenes in forensic applications, require inclusion in law enforcement and forensic databases. Unlike fingerprints, palmprints are larger, and features such as minutiae are approximately ten times more abundant, accompanied by more prominent and wider creases. Consequently, accurately and efficiently identifying latent palmprints within stored reference palmprints poses significant challenges. Using frequency-domain approaches and deep convolutional neural networks (DCNNs), we present a new palmprint segmentation method in this work that can be used for both latent and full-impression prints. The method creates a binary mask. Additionally, we introduce a palmprint quality estimation technique for latent and full-impression prints. This method involves partitioning each palmprint into non-overlapping blocks and considering larger windows centered on each block to derive frequency-domain values, effectively accounting for creases and enhancing the overall quality mapping. Furthermore, we present a region-growing-based palmprint enhancement approach, starting from high-quality blocks identified through our quality estimation method. Similar to the quality estimation process, this method operates on blocks and windows, transforming high-quality windows into the frequency domain for processing before reverting to the spatial domain, resulting in improved neighboring-block outcomes. Finally, we propose two distinct minutiae-based indexing methods and enhance an existing matching-based indexing approach. Our experiments leverage three palmprint datasets, only one of which contains latent palmprints, showcasing superior accuracy compared to existing methods.
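The block-and-window quality estimation admits a brief illustration. The following is a minimal sketch under the assumption that a block's quality is the fraction of spectral energy falling in a ridge-frequency band of the FFT of a larger centered window; the band limits, block and window sizes, and random stand-in image are invented, and the dissertation's estimator is more elaborate.

```python
# Hedged sketch: block-wise quality map from windowed 2D FFT energy.
import numpy as np

def quality_map(img, block=16, win=32, band=(0.05, 0.25)):
    h, w = img.shape
    qmap = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            # Larger window centered on the block, clipped at image borders.
            cy, cx = bi * block + block // 2, bj * block + block // 2
            y0, x0 = max(cy - win // 2, 0), max(cx - win // 2, 0)
            patch = img[y0:y0 + win, x0:x0 + win]
            spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
            fy = np.fft.fftshift(np.fft.fftfreq(patch.shape[0]))
            fx = np.fft.fftshift(np.fft.fftfreq(patch.shape[1]))
            r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
            ridge_band = (r >= band[0]) & (r <= band[1])
            qmap[bi, bj] = spec[ridge_band].sum() / spec.sum()
    return qmap

img = np.random.default_rng(4).random((128, 128))  # stand-in palmprint patch
print(quality_map(img).shape)  # (8, 8) block-quality grid
```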
- A new methodology for inverse kinematics and trajectory planning of humanoid biped robots (Instituto Tecnológico y de Estudios Superiores de Monterrey) Rodriguez, Alejandro; Soto Rodríguez, Rogelio; School of Engineering and Sciences; Campus Monterrey
This dissertation presents a new methodology for inverse kinematics and trajectory planning for small-sized humanoid biped robots. Regarding the inverse kinematics, this study presents an explicit, omnidirectional, analytical, and decoupled closed-form solution for the lower-limb kinematics of the humanoid robot NAO. It starts by decoupling the position and orientation analysis from the concatenation of Denavit-Hartenberg (DH) transformation matrices. Here, the joint activation sequence for the DH matrices is mathematically constrained to follow the geometry of a triangle. Furthermore, a forward and a reversed kinematic analysis for the support- and swing-phase equations is developed to avoid the complexity of matrix inversion. The allocation of constant transformations allows the position and orientation end-coordinate systems to be aligned with each other. Also, the redefinition of the DH transformations and the use of constraints allow for decoupling the shared Degree of Freedom (DOF) located between the legs and the torso, which activates the torso and both legs when a single actuator (the hip-yaw joint) is driven. Furthermore, a three-dimensional geometric analysis is carried out to avoid singularities during the walking process. Numerical data are presented along with experimental implementations to prove the validity of the analytical results. In relation to the trajectory planning, a method taken from manipulator robot theory is applied to humanoid walking. Fifth- and seventh-order polynomials are proposed to define the trajectories of the Center of Gravity (CoG) and the swing foot. The polynomials are designed so that the acceleration and jerk are constrained to be zero at two particular moments: at the single-support phase (when the robot is standing on a single foot) and at the foot landing (to prevent foot-to-ground impacts), thus minimizing internal disturbance forces. Computer simulations are performed to compare the effects of the acceleration and jerk constraints. In addition, the basis for future work is given by providing a control model for robot equilibrium. First, the analysis starts with a static equilibrium model that reacts to an ankle perturbation by using hip actuation. Second, a dynamic model is proposed that incorporates ground perturbations into the robot model by representing the ground tilt as an additional, passive, and redundant DOF located at the ankle. This procedure allows two separate models (the one corresponding to the humanoid and the one corresponding to the ground) to be combined into a single model, thus minimizing complexity.
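The zero-acceleration and zero-jerk boundary constraints have well-known closed-form polynomial solutions. As a minimal sketch, using textbook normalized profiles assumed consistent with the constraints described above rather than the dissertation's exact trajectories:

```python
# Hedged sketch: normalized point-to-point profiles on tau in [0, 1].
import numpy as np

def quintic(tau):
    """Fifth order: s(0)=0, s(1)=1, zero velocity and acceleration at both ends."""
    return 10 * tau**3 - 15 * tau**4 + 6 * tau**5

def septic(tau):
    """Seventh order: additionally zero jerk at both ends."""
    return 35 * tau**4 - 84 * tau**5 + 70 * tau**6 - 20 * tau**7

t = np.linspace(0.0, 1.0, 5)
print(quintic(t))  # smooth rise from 0 to 1
print(septic(t))   # same endpoints, gentler start and landing
```

Scaling tau by the step duration and the output by the step length yields CoG or swing-foot coordinates with the stated boundary behavior.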
- A new supervised learning algorithm inspired on chemical organic compounds (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2013) Ponce Espinosa, Hiram Eredín
In this work, a new supervised learning method called artificial organic networks is proposed for modeling problems, i.e., fitting, analysis, inference, and classification. This technique is inspired by chemical organic compounds, owing to their characteristics of stability, encapsulation, inheritance, organization, and robustness. Additionally, this work presents artificial hydrocarbon networks, a supervised learning algorithm inspired by chemical hydrocarbon compounds and proposed under the artificial organic networks technique.
- A novel feature extraction methodology using Inter-Trial Coherence framework for signal analysis – A case study applied towards BCI (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2024-11) López Bernal, Diego; Ponce Cruz, Pedro; Ponce Espinosa, Hiram; López Caudana, Edgar Omar; Bustamante Bello, Martín Rogelio; School of Engineering and Sciences; Campus Ciudad de México; Balderas Silva, David Christopher
Signal classification in environments with a low signal-to-noise ratio (SNR) presents a significant challenge across various fields, from industrial monitoring to biomedical applications. This work explores a novel methodology aimed at improving classification accuracy in such conditions, using EEG-based Brain-Computer Interfaces (BCIs) for inner speech decoding as a case study. EEG-based BCIs have emerged as a promising technology for providing communication channels for individuals with speech disabilities, such as those affected by amyotrophic lateral sclerosis (ALS), stroke, or other neurodegenerative diseases. Inner speech classification, a subset of BCI applications, aims to interpret and translate silent, inner speech into meaningful linguistic information. Despite the potential of BCIs, current methodologies for inner speech classification lack the accuracy needed for practical applications. This work investigates the use of inter-trial coherence (ITC) as a novel feature extraction technique to enhance the accuracy of inner speech classification in EEG-based BCIs. The study introduces a methodology that integrates ITC within a complex Morlet time-frequency representation framework. EEG recordings from ten participants imagining four distinct words (up, down, right, and left) were processed and analyzed. Five different classification algorithms were evaluated: Random Forest (RF), Support Vector Machine (SVM), k-Nearest Neighbors (kNN), Linear Discriminant Analysis (LDA), and Naive Bayes (NB). The proposed method achieved notable classification accuracies of 75.70% with RF and 66.25% with SVM, demonstrating significant improvements over traditional feature extraction methods. These findings indicate that ITC is a viable technique for enhancing the accuracy of inner speech classification in EEG-based BCIs. The results suggest practical implications for improving communication and navigation capabilities for individuals with ALS or similar conditions. This work lays the foundation for future research on phase-based feature extraction, opening new avenues for understanding the neural mechanisms underlying inner speech and advancing the accuracy and efficiency of BCI systems.
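The core feature can be stated in one line: at a given time and frequency, ITC is the magnitude of the across-trials mean of unit phase vectors, ranging from 0 (random phase) to 1 (perfect phase locking). Below is a minimal sketch with a hand-rolled complex Morlet wavelet and synthetic phase-locked trials; it is illustrative rather than the thesis pipeline, and the sampling rate, frequency, and cycle count are assumptions.

```python
# Hedged sketch: inter-trial coherence at one frequency via complex Morlet.
import numpy as np

fs, f0, n_trials = 256, 10.0, 20          # Hz, target frequency, trial count
t = np.arange(fs) / fs

# Complex Morlet wavelet centered at f0 (7 cycles).
tw = np.arange(-0.5, 0.5, 1 / fs)
sigma = 7 / (2 * np.pi * f0)
wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw**2 / (2 * sigma**2))

rng = np.random.default_rng(3)
trials = np.array([np.sin(2 * np.pi * f0 * t) + 0.5 * rng.normal(size=fs)
                   for _ in range(n_trials)])

# Convolve each trial, keep only the phase, then average across trials.
analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])
itc = np.abs(np.mean(analytic / np.abs(analytic), axis=0))
print(itc.mean())  # near 1 for phase-locked trials, near 0 for pure noise
```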
- A novel functional tree for class imbalance problems (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2022-11) Cañete Sifuentes, Leonardo Mauricio; Monroy Borja, Raúl; Morales Manzanares, Eduardo; Gutiérrez Rodríguez, Andrés Eduardo; Cantú Ortiz, Francisco; Conant Pablos, Santiago; School of Engineering and Sciences; Campus Estado de México; Medina Pérez, Miguel Angel
Decision trees (DTs) are popular classifiers, partly because they provide models that are easy to explain and because they show remarkable performance. To improve the classification performance of individual DTs, researchers have used linear combinations of features in inner nodes (Multivariate Decision Trees), leaf nodes (Model Trees), or both (Functional Trees). Our general objective is to develop a DT using linear feature combinations that outperforms the rest of such DTs in terms of classification performance, as measured by the Area Under the ROC Curve (AUC), particularly in class imbalance problems, where one of the classes in the database has few objects compared to another class. We establish that, in terms of classification performance, there exists a hierarchy where Functional Trees (FTs) surpass Model Trees, which in turn surpass Multivariate Decision Trees. Having shown that Gama's FT, the only FT to date, has the best classification performance, we identify limitations that hinder its classification performance. To improve the classification performance of FTs, we introduce the Functional Tree for class imbalance problems (FT4cip), which takes care in each design decision to improve the AUC. The decision of which pruning method to use led us to the design of the AUC-optimizing Cost-Complexity pruning algorithm, a novel pruning algorithm that does not degrade classification performance in class imbalance problems because it optimizes AUC. We show how each design decision taken when building FT4cip contributes to classification performance or to simple tree models. We demonstrate through a set of tests that FT4cip outperforms Gama's FT and excels in class imbalance problems. All our results are supported by a thorough experimental comparison on 110 databases using Bayesian statistical tests.
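AUC, the quantity FT4cip optimizes, equals the probability that a randomly chosen object of the positive class receives a higher score than a randomly chosen negative one, which is why it remains informative under class imbalance. A minimal sketch of this standard Mann-Whitney formulation (not thesis code), with toy scores:

```python
# Hedged sketch: AUC as the fraction of correctly ordered positive-negative
# pairs, counting ties as half.
import numpy as np

def auc(scores, labels):
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 0, 1, 0])
print(auc(scores, labels))  # 5 of 6 pairs ordered correctly -> 0.833...
```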
- A prescriptive model for supply chain integration: an evolutionary approach. Canfield Rivera, Carlos Eduardo; Gaytán Iniestra, Juan