Friday, April 24, 2026, 2:00 PM - 4:00 PM
Location: FMH 462
Contact: mathstatadmin@pdx.edu

Title: A Unified Framework for Explainability and Interpretability of Neural Networks via Hyperparameter-Extended Influence Functions

Abstract: Understanding how model predictions and training outcomes vary with changes in data, features, and modeling choices is central to explainable artificial intelligence. This dissertation introduces a unified framework for explainability by generalizing classical influence functions to encompass user-defined hyperparameters embedded in the training loss, model architecture, or data representation. Extending influence functions in this way broadens their applicability and integrates multiple explainability techniques into a single, coherent approach. The framework provides a common mathematical foundation linking data impact, feature importance, and model design analysis, and supports a broad class of additional explainability analyses beyond these settings. These applications are demonstrated and validated across diverse neural network settings. In addition, we evaluate the practical feasibility of the proposed framework for large-scale models. Collectively, this work positions sensitivity analysis via hyperparameter-extended influence functions as a practical and scalable basis for explainability in modern machine learning systems.
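
For readers unfamiliar with the terminology, the following is a minimal sketch of what a hyperparameter-extended influence function can look like, assuming the standard empirical-risk formulation of classical influence functions and a scalar hyperparameter \(\lambda\) entering the training loss; the notation (\(\ell\), \(z_i\), \(H_{\hat{\theta}}\)) is illustrative and the dissertation's own definitions may differ. With optimal parameters and Hessian

\[
\hat{\theta}(\lambda) = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n} \ell(z_i, \theta; \lambda),
\qquad
H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2}\, \ell(z_i, \hat{\theta}; \lambda),
\]

the classical influence of upweighting a training point \(z\) is

\[
\mathcal{I}_{\mathrm{up}}(z) = -\, H_{\hat{\theta}}^{-1}\, \nabla_{\theta}\, \ell(z, \hat{\theta}; \lambda),
\]

and differentiating the first-order optimality condition \(\frac{1}{n}\sum_{i} \nabla_{\theta}\ell(z_i, \hat{\theta}(\lambda); \lambda) = 0\) with respect to \(\lambda\) gives the analogous sensitivity to the hyperparameter,

\[
\frac{d\hat{\theta}(\lambda)}{d\lambda}
= -\, H_{\hat{\theta}}^{-1}\, \frac{\partial}{\partial \lambda}\!\left( \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}\, \ell(z_i, \hat{\theta}; \lambda) \right),
\qquad
\frac{d\, \ell(z_{\mathrm{test}}, \hat{\theta}(\lambda))}{d\lambda}
= \nabla_{\theta}\, \ell(z_{\mathrm{test}}, \hat{\theta})^{\top} \frac{d\hat{\theta}(\lambda)}{d\lambda}.
\]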