Natalia Cherezova

Projects

Year: 2022 - 2026
CRASHLESS aims to deliver radically new cross-layer reliability and self-health awareness technology for tomorrow's intelligent autonomous systems and IoT edge devices in Estonia and the EU. The enormous complexity of today's advanced cyber-physical systems and systems of systems is multiplied by their heterogeneity and by emerging computing architectures employing AI-based autonomy. Setups such as autonomous swarms of robotic vehicles are already on the doorstep and call for novel approaches to reliability across all layers. Continuous self-health awareness and an infrastructure for in-field self-healing are becoming enabling factors for new IoT edge devices and systems on their way to market. The new deep tech developed in CRASHLESS equips engineers with design-phase solutions and in-field instruments for industry-scale systems and, ultimately, delivers the user experience of crashless system operation. The results will be validated in close collaboration with Estonian companies.
Year: 2023 - 2024
The objective of this collaborative project is to enable trustworthy AI hardware through explainable and efficient Deep Neural Networks (DNNs). As its main contribution, the project will establish the EnTrustED Framework for DNN hardware design analysis, which follows a novel design flow. First, at design time, a combination of DNN-tailored Approximate Computing (AxC) techniques will be applied to enhance the compute efficiency of the DNN inference hardware. The Framework will enable a simulation-based analysis to identify the neurons that are impractical to optimise and must either keep their initial Exact Computing (ExC) implementation or have their approximation reduced. It also aims to equip the AI hardware with self-test mechanisms to detect hardware errors and with fault-tolerance mechanisms to recover from errors that occur, so that the AI algorithm can continue uninterrupted. As one of its novelties, the project views eXplainable AI (XAI) from a hardware perspective: we intend to account for AxC during explainability analysis and thus ensure a correct explanation of the decision taken by the DNN. Once the correct behaviour of the hardware is guaranteed and AxC has been properly considered, explainability approaches can safely be run at design time to profile the DNN implementation and identify input-specific significant neurons. The experimental nature of the project and the high interdependency of the contributions by EC-Lyon and TalTech make the envisioned face-to-face visits essential to achieving the goals of the collaboration.
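The simulation-based neuron analysis described above can be sketched in a few lines of Python. This is an illustrative toy, not the EnTrustED Framework itself: the two-layer network, the use of coarse weight quantisation as the stand-in AxC technique, and the acceptance threshold are all assumptions. The idea it demonstrates is the same, though: approximate one neuron at a time, simulate the network on sample inputs, and flag neurons whose output deviation is too large to keep their Exact Computing (ExC) implementation.

```python
# Illustrative per-neuron sensitivity analysis for approximate computing.
# Assumptions (not from the project): toy MLP, 3-bit uniform quantisation
# as the AxC technique, and an arbitrary deviation threshold.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: 4 inputs -> 8 hidden neurons -> 3 outputs
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))
X = rng.normal(size=(32, 4))          # sample inputs for the simulation

def forward(w1, w2, x):
    h = np.maximum(x @ w1, 0.0)       # ReLU hidden layer
    return h @ w2

def quantize(w, bits=3):
    """Coarse uniform quantisation as a stand-in AxC technique."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

baseline = forward(W1, W2, X)

def neuron_sensitivity(j):
    """Mean output deviation when only hidden neuron j is approximated."""
    w1 = W1.copy()
    w1[:, j] = quantize(W1[:, j])
    return np.mean(np.abs(forward(w1, W2, X) - baseline))

THRESHOLD = 0.05                      # assumed acceptance threshold
keep_exact = [j for j in range(8) if neuron_sensitivity(j) > THRESHOLD]
approximate = [j for j in range(8) if j not in keep_exact]
print("keep ExC:", keep_exact)
print("approximate:", approximate)
```

In a real flow, the quantisation step would be replaced by the actual hardware approximation under study, and the deviation metric by an application-level accuracy measure, but the keep-exact/approximate partition of neurons is the output the design phase consumes.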