The NSF ACCESS Regional AI Workshop – SoCal Edition invites researchers, educators, and students from across Southern California who are using, or are curious about using, AI and advanced computing in their work. Whether you’re part of the ACCESS program, exploring NAIRR resources, or simply interested in practical AI tools and workflows, this free, one-day, in-person event is for you.

This workshop, led by ACCESS Support, will include presentations on the use of AI for research and education, provide an overview of the NAIRR Pilot, and connect practitioners in the Southern California region who use the NAIRR Pilot ecosystem. It will explore how to make the most of NAIRR allocations, highlight practical tools and workflows, and share strategies for advancing research across disciplines with AI. Participants will gain insights into best practices, hear success stories from the community, and connect with peers to exchange ideas and foster collaboration.

The NAIRR Pilot is NSF’s flagship program for providing researchers with access to commercial and academic cyberinfrastructure (CI) resources, whether they are conducting research in AI itself or applying AI to their science or education.

This workshop offers a unique opportunity to strengthen your AI skills, broaden your network, and become part of the growing regional AI community. The workshop will provide an opportunity to present lightning talks or posters.

This is an application to attend. Space is limited to 100 participants. If applications exceed the available space, participants will be selected based on their responses.

Applications are now closed.

Lodging Information

SOLD OUT - USC Hotel: Link to book
Distance from USC Ginsburg Hall: 0.6 miles
Address: 3540 S Figueroa Street, Los Angeles, CA 90007
Google Map Directions

Hotel Figueroa : Link to book
Distance from USC Ginsburg Hall: 3 miles
Address: 939 S Figueroa St, Los Angeles, CA 90015
Google Map Directions

Courtyard by Marriott LA Live : Link to book
Distance from USC Ginsburg Hall: 3.3 miles
Address: 901 W Olympic Blvd, Los Angeles, CA 90015
Google Map Directions

How it Started

In April 2025, NAIRR held “AI Unlocked: Empowering Higher Education through Research and Discovery” in Denver, Colorado, with about 350 attendees. Based on that workshop’s success, NAIRR decided to hold smaller, regionally focused workshops limited to about 100 attendees. The first was hosted by RMACC (see agenda here) in Colorado in August 2025. A second workshop was hosted in Kentucky in early October 2025. USC/ISI is organizing the Southern California regional workshop in January 2026.

Agenda

Time Topic
8:00 - 9:00 am Check in and breakfast
9:00 - 9:10 am Welcome - Ewa Deelman, University of Southern California
9:10 - 10:40 am AI on Campus
9:10 - 9:40 am
How Generative AI Is Reshaping Learning, Agency, and Equity in Higher Education Worldwide, Stephen J. Aguilar, University of Southern California

Abstract

This talk draws on international, large-scale research to examine how students and educators in higher education are using generative AI as a tool for learning, help-seeking, and decision-making. I distinguish between instrumental uses of AI that support agency and understanding and executive uses that risk displacing human judgment, and I show how institutional context and policy shape these patterns across countries. The talk concludes with implications for designing AI-enabled higher education that strengthens, rather than substitutes for, human intelligence.

Presenter

Stephen J. Aguilar

Stephen J. Aguilar, University of Southern California

Dr. Stephen J. Aguilar is an Associate Professor of Education at the USC Rossier School of Education and co-leads USC’s Center for Generative AI and Society. His research focuses on investigating how educational technologies influence teaching, learning, and motivation.

His work has been funded by the National Science Foundation, the American Educational Research Association (AERA), the National Institutes of Health, and the U.S. Army Research Office. Dr. Aguilar has been a guest on NPR’s AirTalk and has been interviewed by the Los Angeles Times, The New York Times, USA Today, The Atlantic, Bloomberg, and The Washington Post on the topic of generative AI’s effects on education.

9:40 - 10:10 am
Artificial Intelligence’s Transformative Research Methods and Techniques in the Digital Humanities, Danielle Mihram, University of Southern California

Abstract

The term “artificial intelligence” was coined in 1956 by John McCarthy, a Dartmouth College professor, at the Dartmouth Summer Research Project on Artificial Intelligence (June 18-August 17, 1956). Very early computational methods in the Digital Humanities (DH) focused primarily on text analysis, using tools for concordances, lexical statistics, and stylometry. These methods and techniques were pioneered by Roberto Busa’s project, the Index Thomisticus (a concordance to 179 texts centering on Thomas Aquinas), begun in the 1940s. Projects from the 1960s and 1970s added further key methods and techniques, such as early forms of text encoding and markup for creating scholarly editions, and the analysis of language evolution through word usage and grammatical patterns. The extensive integration of Artificial Intelligence (AI) into DH began in the late 1990s and early 2000s as computational power increased. By 2020 this integration had become a central part of the field, driven by advances in AI techniques such as Natural Language Processing (NLP), machine learning, and image recognition, which allow for the analysis of large datasets that would be impractical to study with manual methods.

These advancements can be seen as a pivotal research event, signaling a transformation in how we study human culture and history, and they are reshaping the traditional ways in which we conduct research, analyze information, and share insights. AI enables researchers to analyze large amounts of data and uncover patterns and insights at speeds previously unattainable, allowing for the creation of more dynamic ways to discover and present historical and cultural content to a potentially broader audience. In this presentation we shall look at key techniques and methods currently used in AI-focused research in the Digital Humanities and examine illustrative case studies.

Presenter

Danielle Mihram

Danielle Mihram, University of Southern California

Danielle Mihram is a University Librarian (rank equivalent to Full Professor) at the University of Southern California (USC) Libraries, where she has been a faculty member since 1989. Prior to USC, she was a member of the faculty of several academic institutions, including the University of Sydney (Australia), Swarthmore College, Haverford College, the University of Pennsylvania, and New York University. She holds a B.A. Honors from the University of Sydney, a Ph.D. from the University of Pennsylvania, and a Master of Library Science (MLS) from Rutgers University.

Since her arrival at USC Libraries, she has held several high-level administrative positions. In 1996 she was appointed the first full-time Director of USC’s Center for Excellence in Teaching (CET) in the Provost’s Office (1996 to 2007), in view of her many years of teaching and mentoring experience as well as her knowledge of information science. She remains a member of CET as one of its Distinguished Faculty Fellows.

Danielle's research interests are multidisciplinary and have led to over a hundred publications and presentations. Her current research focuses on the contributions of the digital humanities to the advancement of human knowledge and the transformative effects of artificial intelligence in research and scholarship. She has been awarded several USC grants, as well as two USC Libraries’ Research Funds, the latter resulting in her leading two Digital Humanities projects: USC Digital Voltaire (2017) and USC Illuminated Medieval Manuscripts (work in progress). She is the recipient of several awards: the Outstanding Scholarly Achievement Award (2003) and the Innovation Award on Teaching and Research (2005), both from the International Institute for Advanced Studies in Systems Research and Cybernetics (Baden-Baden, Germany); the USC Mellon Award for Excellence in Mentoring (2005); and the USC Academic Senate’s Distinguished Faculty Service Award (2008).

10:10 - 10:40 am
AI for All - Nabeel Alzahrani, California State University, San Bernardino

Abstract

AI for All is an introduction to Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), Generative AI (genAI), and Large Language Models (LLMs). The session highlights real-world applications and ethical considerations, empowering both STEM and non-STEM audiences to engage thoughtfully with AI technologies. Participants will gain a foundational understanding of key AI concepts, explore how AI is transforming fields such as education and healthcare, and discuss critical issues of fairness, transparency, and bias.

Presenter

Nabeel Alzahrani

Dr. Nabeel Alzahrani, California State University, San Bernardino (CSUSB)

Dr. Alzahrani is an adjunct professor of Computer Science and Engineering at California State University, San Bernardino (CSUSB), specializing in artificial intelligence (AI), high-performance computing (HPC), and cybersecurity. He earned his Ph.D. in Computer Science from the University of California, Riverside. Dr. Alzahrani also serves as a consultant in the Identity, Security, and Enterprise Technology Department at CSUSB. He is the co-founder of the Artificial Intelligence, Quantum Computing, Fusion Energy, and Semiconductors (AQFS) Research and Training Lab at CSUSB. In addition, he is a published author of books and research papers and has delivered numerous presentations in his field.

10:40 - 11:00 am Break
11:00 - 12:30 pm AI Resources
11:00 - 11:30 am
Introduction to NAIRR and ACCESS - Empowering Research and Education with Advanced Computing Resources, Shelley Knuth, University of Colorado Boulder

Abstract

This talk will go over the resources available to the research community as part of the National Artificial Intelligence Research Resource (NAIRR) Pilot and the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) projects.

Presenter

Shelley Knuth

Shelley Knuth, University of Colorado Boulder

Shelley is the Assistant Vice Chancellor for Research Computing at the University of Colorado Boulder. She oversees advanced computing and data services that support researchers nationwide, including supercomputing, large-scale data storage, secure enclaves, and high-speed networking. She also serves as Executive Director of the Center for Research Data and Digital Scholarship (CRDDS) and chairs the Rocky Mountain Advanced Computing Consortium (RMACC), fostering collaboration across the region.

Shelley is the lead principal investigator for the NSF-funded ACCESS Support project and contributes to several other NSF initiatives. Additionally, she helps guide national strategy as co-lead of the User Experience Working Group for the National Artificial Intelligence Research Resource (NAIRR) pilot.

She earned her PhD in Atmospheric and Oceanic Sciences from CU Boulder in 2014.

11:30 - 12:00 pm
Getting Access to NAIRR Pilot Resources, Maytal Dahan, University of Texas at Austin

Abstract

This talk guides participants through the process of accessing resources from the National AI Research Resource (NAIRR Pilot), emphasizing preparation, selection, and proposal submission. Key topics include:

  1. Preparation for Submitting a Proposal:
    • Defining the project scope and running test simulations using a sandbox to identify resource needs.
    • Evaluating computational requirements (e.g., CPU/GPU, memory) and necessary applications based on preliminary tests.
  2. Matching Resources:
    Exploring computational resources and determining the best match for specific project needs.
  3. Submitting an Allocation Request:
    Step-by-step demo with guided, hands-on practice.
  4. Support and Guidance:
    Leveraging office hours, ticket systems (NAIRR Pilot or Resource Provider), and consultations for personalized assistance.

Presenter

Maytal Dahan

Maytal Dahan, University of Texas at Austin

Maytal Dahan is the Director of Advanced Computing Interfaces (ACI) at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. She leads efforts to design and deploy cyberinfrastructure platforms and science gateways that broaden access to computing and data for a wide range of research communities. With over two decades of experience in software engineering and research computing, Maytal has been a key contributor to projects such as Tapis, SGX3, and XSEDE.

12:00 - 12:30 pm
AI Infrastructure for All - Frank Würthwein, San Diego Supercomputer Center

Abstract

The National Research Platform (NRP) provides national-scale AI infrastructure for education and research that enables researchers and their institutions to own their own AI infrastructure without having to operate it. It provides AI infrastructure management across more than 100 data centers today. The user interfaces NRP offers include Jupyter Notebooks, LLM chat and API access, the native Kubernetes API, the National Data Platform UI/UX, and HTCondor via NRP’s integration with the OSPool managed by PATh. Dozens of colleges nationwide use the platform to bring digital assets into the classroom, including data, compute, and AI tools.

We will give an overview of what the NRP provides to students, educators, researchers, and institutions, including a “walk through” of the training materials and other support mechanisms for getting started.

Presenter

Frank Würthwein

Frank Würthwein, San Diego Supercomputer Center

Frank Würthwein is the Director of the San Diego Supercomputer Center. He holds faculty appointments at UC San Diego in the Physics Department and the Halıcıoğlu Data Science Institute. After receiving his Ph.D. from Cornell in 1995, he held appointments at Caltech, MIT, and Fermi National Accelerator Laboratory before joining the UC San Diego faculty in 2003. His research focuses on globally distributed compute and data systems (e.g., OSG, NRP, OSDF), experimental particle physics, and distributed high-throughput computing. As an experimentalist, he is interested in instrumentation and data analysis. In the last couple of decades, this has meant developing, deploying, and operating worldwide distributed computing systems that support the processing and analysis of large data volumes. In 2010, “large” data volumes were measured in petabytes. By 2030, they are expected to grow to exabytes.

12:30 - 1:30 pm Lunch
1:30 - 2:30 pm Lightning Talks: What Can You Do With AI? (10 min talks, 5 min Q/A)
1:30 - 1:45 pm
Too Smart to be Human: Can AI Agents Replace Us in Behavioral Experiments? - John Garcia, California Lutheran University

Abstract

Can AI replace human subjects? Researchers are increasingly using models such as GPT-4 as surrogates for humans because they are cheaper and faster; however, do they behave like us? To find out, I built 96 AI "retail investors" and unleashed them in a stock market simulation, exposing them to viral "meme stock" buzz while holding financial fundamentals constant. The results were striking: When human retail investors see viral hype, they buy (+30–50%); my AI retail investor agents did the opposite, decreasing buying by 45%. While humans famously hold on to losing investments for too long, my agents sold losers three times faster than they sold winners. They acted exactly like financial textbooks say we should, and exactly unlike real people do. I call this "Hyper-Rationality." AI models are trained on vast amounts of advice: "avoid bubbles," "cut your losses." They prioritize logical training over character instruction; even when explicitly programmed to experience "FOMO," they calculated the transaction costs and rationally refrained from trading. The implication: AI can simulate how we should behave, but it lacks the emotional software to replicate how we actually behave.

1:45 - 2:00 pm
Deepfakes, Data, and Democracy: Artificial Intelligence in Political Life - Michael Ault, California State University, Bakersfield

Abstract

I explore how artificial intelligence is transforming politics and political communication, from government regulation and global power struggles to the future of democracy itself. I examine historical efforts to regulate disruptive technologies alongside contemporary debates over AI policy in the U.S., Europe, and China. Through case studies of recent elections, I also investigate how AI tools (i.e., from data analytics to deepfakes) are reshaping campaigns, media narratives, and voter trust. Ethical challenges such as surveillance, bias, and accountability are also analyzed alongside questions of global competition and control. Overall, I seek to critically assess who governs in the age of algorithms and what that means for justice, democracy, and political power.

2:00 - 2:15 pm
AI Agents - Prakashan Korambath, University of California, Los Angeles

Abstract

AI agents represent a significant evolution beyond traditional chatbots and simple question-answering systems. They aren't merely delivering static information; they are dynamic entities that can reason, act, and collaborate to solve complex problems and automate tasks, often with minimal or no human intervention. This shift is powered by their ability to leverage external tools in the form of APIs that provide access to dynamic, real-world information. By bridging knowledge gaps and generating new insights, AI agents are poised to fundamentally change how we interact with technology and automate workflows across every industry. Also, tools developed by different model providers can interact well using Model Context Protocol (MCP) with client server architecture to enhance usage of Agentic AI concepts in real time and real data.
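As a rough illustration of the pattern described above, an agent reduces to a loop that selects a registered tool and invokes it through its API. This is a minimal sketch with hypothetical tool names, not code from the talk and not an MCP implementation:

```python
# Illustrative sketch of an agent's tool-use loop.
# The tools below are hypothetical stand-ins for real external APIs.

def get_weather(city):
    # Stand-in for a real API call returning dynamic, real-world data.
    return {"city": city, "temp_c": 21}

def add(a, b):
    # Trivial "calculator" tool.
    return a + b

TOOLS = {"get_weather": get_weather, "add": add}

def run_agent(plan):
    """Execute a list of (tool_name, kwargs) steps and collect results."""
    results = []
    for tool_name, kwargs in plan:
        tool = TOOLS[tool_name]          # agent selects a tool
        results.append(tool(**kwargs))   # agent acts via the tool's API
    return results

out = run_agent([("add", {"a": 2, "b": 3}),
                 ("get_weather", {"city": "Los Angeles"})])
print(out)
```

In a real agentic system, an LLM would produce the plan dynamically, and a protocol such as MCP would standardize how tools from different providers are discovered and called.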

2:15 - 2:30 pm
AI-Driven Framework for Personalized Insulin Dosing and Safer Diabetes Management - Yash Kishorbhai Pansheriya, University of California, Los Angeles

Abstract

Type 1 Diabetes (T1D) management demands constant monitoring and real-time decision making, yet traditional insulin dosing formulas remain static and poorly suited to unpredictable conditions such as skipped meals or variable physical activity. This research introduces a machine-learning-based framework for personalized insulin recommendations that adapts dynamically to patient-specific data.

The framework integrates continuous glucose monitoring, insulin on board, carbohydrate intake, and physical activity to predict short-term glucose levels and identify glycemic risk zones. Based on these predictions, the system generates adaptive insulin or nutrition recommendations derived from clinical principles but tailored to each user’s condition.

To improve transparency and accessibility, a Retrieval-Augmented Generation (RAG)–based large language model is integrated as an interactive chatbot interface, translating model insights into patient-specific explanations.

The talk will discuss the design, modeling workflow, and the integration of explainable AI and conversational systems to enhance reliability, interpretability, and real world usability in diabetes management.
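To make the retrieval step of such a RAG pipeline concrete, here is a deliberately simplified sketch; the corpus, scoring function, and generation model of the actual system are not shown, and the example documents are hypothetical:

```python
# Toy retrieval step of a RAG pipeline: rank documents by word overlap
# with the query, then hand the top match to a generator (not shown).

def retrieve(query, docs, k=1):
    """Rank docs by word overlap with the query and return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "insulin on board decays over several hours",
    "carbohydrate intake raises blood glucose",
    "exercise can lower glucose levels",
]
print(retrieve("why does exercise change my glucose", docs))
```

Production RAG systems replace word overlap with dense vector similarity, but the shape of the pipeline, retrieve then generate, is the same.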

2:30 - 3:30 pm AI Ready Data
2:00 - 2:30 pm
Sage Grande: An AI Testbed for Edge Computing - Pete Beckman, Northwestern University

Abstract

Sage Grande is national-scale cyberinfrastructure designed for AI-driven edge computing. With more than 100 nodes deployed across diverse environments—from Chicago’s urban streets to national parks—Sage enables students and scientists to develop and deploy AI applications in the field. By integrating sensors such as cameras, microphones, and LiDAR with AI-driven computation, researchers can build novel systems for tasks like wildfire detection, agricultural monitoring, bioacoustic analysis, and understanding urban dynamics.

Presenter

Pete Beckman

Pete Beckman, Northwestern University

Pete Beckman is a recognized global expert in high-end computing systems. During the past 25 years, he has designed and built software and architectures for large-scale parallel and distributed computing systems. Pete helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory, and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. He later served as vice president of Turbolinux’s worldwide engineering efforts.

Pete joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation.

He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and co-Director of the Northwestern Argonne Institute of Science and Engineering. He is also a co-founder of the International Exascale Software Project (IESP).

2:30 - 3:00 pm
From Promise to Practice: Reimagining Resilient Agriculture Through AI - Nirav Merchant, University of Arizona

Abstract

This talk will cover how a NAIRR allocation was used to build the foundation model for InsectNet, how the model is being made accessible to the agricultural community, and lessons learned along the way.

Presenter

Nirav Merchant

Nirav Merchant, University of Arizona

Nirav Merchant serves as the Director of the Data Science Institute. For the past three decades at the University of Arizona, his research has been focused on the development of scalable computational platforms (cyberinfrastructure) in support of open science projects. His work is primarily directed towards reducing the socio-technical barriers in adoption of emerging computational and information sciences advances by domain sciences.

His interests encompass large-scale data management platforms, data delivery technologies, cloud native methodologies, secure data analysis enclaves, and the use of managed sensors and wearables for health interventions. He is passionate about developing learning material for informed adoption and utilization of Machine Learning (ML) and Artificial Intelligence (AI) based analysis methods into course work and for workforce development.

He serves as the principal investigator for NSF CyVerse, a national-scale cyberinfrastructure, and co-principal investigator for NSF Jetstream, the first user-friendly, scalable cloud environment for NSF XSEDE/ACCESS. He leads the cyberinfrastructure team for the NSF- and USDA-funded National Artificial Intelligence Institute for Resilient Agriculture (AIIRA).

3:00 - 3:30 pm
Generative Artificial Intelligence and Deep Learning Using NAIRR Reveal Brain Aging Trajectories Before Alzheimer's Disease - Andrei Irimia, PhD, University of Southern California

Abstract

Understanding why individuals age differently at the level of the brain is a central question in neuroscience and medicine. Our research leverages large-scale neuroimaging datasets and artificial intelligence to quantify the pace and pattern of brain aging from structural MRI. Using deep learning models trained on thousands of MRI scans, we estimate “brain age” as a personalized biomarker of neural health. These measures reveal that accelerated brain aging predicts a higher risk of progression from normal cognition to impairment, whereas slower brain aging confers resilience. Regional brain aging patterns, identified through interpretable AI, further distinguish those at risk for Alzheimer’s disease and related dementias. We also integrate multimodal data to examine how chronic conditions—such as cardiovascular disease, metabolic disorders, and traumatic brain injury—as well as women’s health factors like menopause and reproductive history, shape the trajectory of brain aging. This work illustrates how AI-driven neuroimaging analytics can inform individualized risk stratification, preventive strategies, and ultimately precision aging research.

Presenter

Andrei Irimia

Andrei Irimia, PhD, University of Southern California

Andrei Irimia, PhD is an associate professor in the Leonard Davis School of Gerontology at the University of Southern California, with courtesy appointments in biomedical engineering and quantitative biology. His research focuses on brain aging, traumatic brain injury, and Alzheimer’s disease, using advanced neuroimaging and quantitative methods to understand individual variability in aging trajectories and dementia risk. Dr. Irimia leads several NIH-funded studies examining how chronic disease variables and women's health factors influence brain aging and neurodegeneration. His work bridges population neuroscience and clinical neurology, with the goal of improving early detection and stratification of patients at risk for cognitive decline.

3:30 - 4:00 pm Break
4:00 - 5:00 pm
Focus Demo: Pegasus Workflow Management System - Karan Vahi and Mats Rynge, University of Southern California

Abstract

Pegasus WMS (Workflow Management System) streamlines the execution of complex AI and machine learning workloads by automating the end-to-end pipeline from data ingestion to model evaluation. Through ACCESS Pegasus, researchers can utilize a hosted workflow environment that simplifies the orchestration of jobs across distributed national cyberinfrastructure. This platform allows users to leverage pre-configured Jupyter Notebook examples and the Pegasus Python API to design reproducible AI workflows.

To optimize the use of specialized hardware, Pegasus utilizes glideins (pilot jobs) to provide a unified overlay over GPU resources. This abstraction layer allows the workflow manager to treat diverse, distributed compute nodes as a single, coherent pool of resources. By deploying these pilot jobs, Pegasus can dynamically provision and manage high-performance GPU environments, enabling AI workloads to scale across multiple clusters while maintaining consistent performance and reducing the overhead typically associated with manual resource allocation.
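The core idea a workflow manager automates can be sketched with a toy example. This is NOT the Pegasus API (see the Pegasus Python API documentation for the real interface), just a minimal illustration of DAG scheduling using a hypothetical three-job ML pipeline:

```python
# Conceptual sketch of workflow ordering: jobs form a directed acyclic
# graph (DAG), and a scheduler runs each job only after its dependencies.
from graphlib import TopologicalSorter

# Hypothetical ML pipeline: job -> set of jobs it depends on.
deps = {
    "preprocess": set(),
    "train":      {"preprocess"},
    "evaluate":   {"train"},
}

# A valid execution order always places dependencies before dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Pegasus layers data management, fault tolerance, and resource provisioning (e.g., the glideins described above) on top of this basic ordering, so users describe the DAG and the system handles execution across distributed resources.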

5:00 - 5:05 pm Closing Remarks - Ewa Deelman, University of Southern California
5:05 - 7:00 pm Social Mixer / Poster Session

Posters

ACOSUS - An AI-driven Counseling System for Transfer Students - Sherrene Bogle, Cal Poly Humboldt

Advancing NLP for Non-Latin Scripts and Languages - Adrianna Tan, Future Ethics

Are We Leaving Non-Latin Scripts and Languages Behind?

The vast majority of NLP research and Large Language Models (LLMs) are focused on high-resource languages, predominantly those using the Latin script (e.g., English, French). This creates a critical gap, leading to performance disparities, systemic biases, and the exclusion of billions of speakers from the benefits of advanced AI. The disproportionate focus on Latin scripts means biases and harms are often not adequately measured or addressed for non-Latin script users. Future work must be linguistically informed and strategically address the resource and structural gaps in order to bridge the gap, so that AI can serve billions of people more safely. In my poster presentation, I am actively seeking bilingual collaborators who are proficient in English and another language/script (especially those non-Latin scripts like Arabic, Hindi, Korean, Japanese, or others) to work on practical tools and research in this area.

AI-Driven Molecular Structure Determination from Ultrafast X-ray Scattering - Roya Moghaddasi Fereidani, University of California San Diego

Understanding molecular structure and dynamics in real time is one of the grand challenges of modern physical chemistry. My research integrates artificial intelligence with quantum molecular simulations to reconstruct molecular structures directly from ultrafast x-ray scattering patterns. While forward simulations of x-ray scattering from known geometries are well established, solving the inverse problem—inferring atomic configurations from measured patterns—remains highly challenging. To address this, I am developing supervised machine-learning models, including convolutional and graph neural networks, trained on first-principles simulations to learn the mapping between scattering patterns and molecular geometries. Once trained, these models can rapidly predict transient molecular structures and distinguish between competing reaction pathways, providing an efficient alternative to traditional ab initio molecular dynamics. This AI-driven framework aims to accelerate the creation of molecular “movies” at femtosecond timescales, opening new possibilities for understanding and controlling photochemical reactions.

Autonomous Self-Healing Memory Systems for Energy-Efficient and Reliable Computing - Marjan Asadinia, California State University, Northridge

Emerging non-volatile memory technologies such as Phase-Change Memory (PCM) offer high density and scalability, but they face critical challenges related to high write energy, long write latency, and limited endurance caused by frequent bit transitions and write-disturbance errors. These limitations motivate the development of self-healing memory systems that can autonomously adapt to workload behavior and mitigate reliability degradation over time. This work presents a machine learning–driven self-healing memory framework that combines adaptive write optimization with proactive error prediction. By analyzing data patterns and write characteristics, the system intelligently reduces unnecessary bit transitions during write operations, leading to lower energy consumption and improved memory lifetime. In parallel, learning-based error prediction models are used to identify error-prone memory regions before failures occur, enabling early intervention through selective rewriting, remapping, or correction. The proposed approach allows the memory system to continuously monitor its state and dynamically adjust its behavior in response to evolving error patterns and workload demands. Experimental evaluation using full-system, cycle-accurate simulation demonstrates notable reductions in write energy and error rates with minimal performance overhead. These results illustrate how integrating machine learning into memory management enables resilient, efficient, and autonomous self-healing behavior for future memory systems.

Deep Learning for Gene-Environment Interaction Analysis of Complex Traits - Jessica George, California State University, Northridge

Many complex traits and diseases arise from the interactions between genetic factors and environmental exposures, commonly referred to as gene-environment (G×E) interactions. Accurately modeling these effects is important for predicting individual risk and understanding sources of trait variability, but it remains challenging due to nonlinear effects and high-dimensional feature spaces. Traditional regression-based approaches typically require interactions to be specified in advance, limiting their ability to capture complex relationships. We present a deep learning (DL) approach for predictive modeling of G×E effects that explicitly learns nonlinear 2-way and higher-order interactions directly from data, including genotype dominance effects (i.e., non-additive genetic contributions). The proposed model is a feed-forward, fully connected neural network that takes genetic and environmental features as inputs and predicts a single outcome, such as a quantitative trait, disease status, or survival phenotype. We benchmark this approach against widely used statistical and machine learning methods, including linear and penalized (LASSO, elastic-net) regression, random forest, gradient boosting (LightGBM), and a tabular prior-data fitted network (TabPFN, an alternative DL approach based on a pre-trained foundation model). Using a controlled simulation study with 100 replicated datasets of 10,000 individuals, all models were fit using main effects only, with genetic variables coded additively and no interaction terms provided. Linear regression was additionally fit under a “gold standard” specification that included main effects, G×E interactions, and appropriate dominance modeling, serving as a reference upper bound on achievable performance. Prediction accuracy was evaluated using R2 across increasing levels of interaction complexity.
Under the main-effects-only specification, regression-based models achieved limited predictive performance (R2 ranging from below 0.01 to 0.20), particularly as interaction complexity increased. In contrast, DL and boosting models achieved substantially higher R2 values in moderate-to-high complexity settings (DL: R2 ≈ 0.21-0.28; boosting: R2 ≈ 0.20-0.27), reflecting their ability to learn nonlinear and interaction-driven signal. Among the models fit with main effects only, TabPFN achieved the highest predictive performance across all complexity levels (R2 ≈ 0.16-0.30), consistently outperforming both regression-based and alternative machine learning approaches. As expected, the gold standard linear regression model yielded the highest overall R2, providing an upper bound on attainable performance. These results demonstrate the advantages of modern machine learning approaches for prediction in settings dominated by complex relationships. Ongoing work extends these methods to real-world genomic datasets to assess scalability, robustness, and practical impact.
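
The feature codings this abstract contrasts (additive genotype coding, a dominance indicator, and the explicit G×E product term that only the "gold standard" linear specification receives) can be sketched per SNP as follows. The function name and layout here are hypothetical illustrations, not taken from the study.

```python
import numpy as np

def encode_snp_features(genotype: int, exposure: float) -> np.ndarray:
    """Hypothetical per-SNP feature encoding for G×E modeling.

    genotype: minor-allele count (0, 1, or 2) -> additive coding
    exposure: a continuous environmental variable

    A main-effects-only model sees just the first three features; the
    "gold standard" linear specification also gets the G×E product,
    which a flexible learner would otherwise have to discover itself.
    """
    additive = float(genotype)                  # 0, 1, or 2 copies
    dominance = 1.0 if genotype == 1 else 0.0   # non-additive heterozygote effect
    gxe = additive * exposure                   # explicit interaction term
    return np.array([additive, dominance, exposure, gxe])
```

Withholding the `gxe` column from all benchmarked models is what lets the simulation isolate each method's ability to recover interaction signal on its own.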

Hierarchical Semantic Memory Transformer (H2MT) - Maryam Haghifam, University of California, Los Angeles

Transformer-based large language models (LLMs) are widely used in language processing, yet they typically restrict the context window when handling long inputs. Furthermore, many existing long-context solutions are inefficient and overlook the structure inherent in documents. As a result, long-context models often treat text as a flat token stream, which obscures hierarchy and wastes computation by processing relevant and irrelevant context alike. We present the Hierarchical Semantic Memory Transformer (H2MT), a semantic hierarchy-aware approach that attaches to a backbone model. H2MT represents a document as a tree and performs level-conditioned routing and aggregation. It first propagates memory embeddings (summary vectors produced by the backbone) upward, injecting child-node memory embeddings into their ancestors to preserve relative context. Finally, the model applies cross-level attention to retrieve related information. H2MT improves quality at similar model size while reducing long-range attention compute, memory use, and parameter count. The approach is most helpful for data with a semantic hierarchy that can be modeled as a tree.
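
One way to picture the upward-propagation step is the toy sketch below. It assumes, purely for illustration, that a parent's memory is the mean of its own embedding and its children's propagated memories; the actual H2MT uses level-conditioned routing and backbone-produced summary vectors rather than a plain average.

```python
import numpy as np

class Node:
    """A document-tree node; leaves carry memory embeddings
    (in H2MT these would be summary vectors from the backbone)."""
    def __init__(self, embedding=None, children=()):
        self.embedding = embedding
        self.children = list(children)
        self.memory = None

def propagate_up(node):
    """Bottom-up pass: inject child memories into the ancestor by
    averaging them with the node's own embedding -- a simplified
    stand-in for H2MT's level-conditioned aggregation."""
    vecs = [propagate_up(child) for child in node.children]
    if node.embedding is not None:
        vecs.append(np.asarray(node.embedding, dtype=float))
    node.memory = np.mean(vecs, axis=0)  # element-wise mean of child/self vectors
    return node.memory
```

After this pass, every ancestor holds a compact summary of its subtree, so cross-level attention can retrieve related information without attending over the full flat token stream.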

Mechanistic Insights into CO₂ Hydrogenation to Methanol over Inverse ZrO₂/Cu Catalysts - Zihan Yang, University of California, Los Angeles

Inverse ZrO₂/Cu shows extraordinary catalytic performance in converting CO₂ to methanol, yet uncertainties remain in the reaction mechanism. While conventional Cu/ZrO₂ systems often exhibit a rate-determining step at formate hydrogenation, evidence for inverse ZrO₂/Cu catalysts has been conflicting. In this work, we employ density functional theory (DFT) calculations to investigate the CO₂ hydrogenation reaction across an ensemble of inverse ZrO₂/Cu configurations under reaction conditions. Detailed reaction-pathway analysis reveals that all the studied inverse structures place the rate-determining step after methoxy formation, typically in hydrogenation to methanol or subsequent water formation, rather than at formate hydrogenation. Structural sensitivity is pronounced: only 19% of the catalyst ensemble is catalytically active across the full pathway, with reactivity favored by partially reduced Zr clusters and reactive sites near the metallic Cu surface that enhance hydrogen dissociation. The simulated reaction mechanism aligns qualitatively and quantitatively with experimental trends, supporting the view that the inverse configuration mitigates formate stabilization and shifts the kinetic bottleneck to later steps in the mechanism, after formation of the methoxy intermediate. These findings clarify the mechanistic origins of activity in inverse ZrO₂/Cu catalysts and highlight the importance of structural ensembles in governing CO₂ hydrogenation performance.

The Year of AI: Raising Campus Awareness Through Art, Exhibits, and Community Engagement - Essraa Nawa, Chapman University

This poster highlights the Leatherby Libraries’ leadership in advancing AI literacy through creative, inclusive, and interdisciplinary approaches. As part of Chapman University’s “Year of AI,” the library launched initiatives such as Beyond the Lens and AI: The Next Chapter, blending art, ethics, and education to inspire campus-wide engagement. Through collaboration with IS&T, Town & Gown, and academic departments, the library positioned itself as a hub for ethical dialogue and innovation. The poster shares replicable models for how libraries can foster AI awareness through community partnerships, exhibitions, and experiential learning.

Too Smart to be Human: Can AI Agents Replace Us in Behavioral Experiments? - John Garcia, California Lutheran University

Can AI replace human subjects? Researchers are increasingly using models such as GPT-4 as surrogates for humans because they are cheaper and faster; however, do they behave like us? To find out, I built 96 AI "retail investors" and unleashed them in a stock market simulation, exposing them to viral "meme stock" buzz while holding financial fundamentals constant. The results were striking: When human retail investors see viral hype, they buy (+30–50%); my AI retail investor agents did the opposite, decreasing buying by 45%. While humans famously hold on to losing investments for too long, my agents sold losers three times faster than they sold winners. They acted exactly like financial textbooks say we should, and exactly unlike real people do. I call this "Hyper-Rationality." AI models are trained on vast amounts of advice: "avoid bubbles," "cut your losses." They prioritize logical training over character instruction; even when explicitly programmed to experience "FOMO," they calculated the transaction costs and rationally refrained from trading. The implication: AI can simulate how we should behave, but it lacks the emotional software to replicate how we actually behave.

Acknowledgements

This workshop is funded by the ACCESS program through National Science Foundation Grant 2138286.