The Intelligence Community (IC) Postdoctoral Research Fellowship Program offers scientists and engineers from a wide variety of disciplines unique opportunities to conduct research on topics relevant to the Intelligence Community. Postdocs conduct the research in partnership with a research advisor and in collaboration with an advisor from the Intelligence Community.
Postdoc Eligibility Details:
U.S. citizenship required
Ph.D. in a relevant field must be completed before the appointment begins and must have been awarded within five years of the appointment start date.
Proposal must be associated with an accredited U.S. university, college or U.S. government laboratory.
Supports ambitious, high-impact scientific research that uses AI as a core part of the solution. While the call emphasizes health and climate, the program remains open to exceptional projects in other domains with strong alignment and measurable impact.
Focus Areas Aligned to CIST
AI-enabled scientific discovery
Foundation models, agents, and open datasets for research
Applied machine learning for health, environmental, or interdisciplinary science
Scalable research cyberinfrastructure and data workflows
Award Summary
Total initiative: $30M globally
Individual awards: $500K-$3M
Additional support: optional Google.org Accelerator participation, technical support, and Google Cloud credits
This opportunity is for academic institutions, nonprofits, and social enterprises partnering with governments to propose generative or agentic AI projects that improve public services. This is a particularly strong match for CIST teams working at the intersection of civic technology, AI, data systems, and public-sector innovation.
Focus Areas Aligned to CIST
AI for public-service delivery
Decision support for public infrastructure and community resilience
Government-facing data systems and analytics
Human-centered AI for high-impact service contexts
Award Summary
Total initiative: $30M globally
Individual awards: $1M-$3M
Additional support: Google.org Accelerator, pro bono technical support, and cloud credits
Register to attend: https://schmidtentities.zoom.us/webinar/register/
Deadline: May 17, 2026
Supports technical research that improves our ability to understand, predict, and control risks from frontier AI systems while enabling trustworthy deployment.
Tier I: A focused, technically rigorous trustworthy-AI research project at a smaller funding scale.
Tier II: A more ambitious, high-impact, field-shaping project at larger scale, with stronger expectations around depth, focus, and PI commitment.
Applicants may apply to either tier or to both. The scientific goals are the same across tiers; both are meant to advance Schmidt's trustworthy AI research agenda, especially around:
Characterizing and forecasting misalignment in frontier AI systems.
Developing generalizable measurements and interventions with credible validity.
Overseeing superhuman AI systems and addressing multi-agent risks.
Focus Areas Aligned to CIST
Trustworthy AI
AI evaluation and measurement
AI safety, oversight, and control
Multi-agent systems and frontier-model risk analysis
Award Summary
Tier I: up to $1M for 1-3 years
Tier II: $1M-$5M+ for 1-3 years
Federal Funding Opportunities (Time Sensitive)
Department of Energy (DOE) DE-FOA-0003612
#AI&ACS #HCC #BIO&HEALTH-INFO
Informational Webinar: Mar 26, 2026
Register to attend: https://science-doe.zoomgov.com/webinar/register/WN_cByyhWASR72Do7yIDpe3_g#/registration
Deadlines:
Phase I application due: April 28, 2026
Phase II LOI due: April 28, 2026 (strongly encouraged)
This opportunity supports interdisciplinary teams using novel AI models and frameworks to accelerate scientific discovery and R&D workflows across DOE-relevant challenge areas. Topics include advanced manufacturing, biotechnology, critical materials, nuclear fission, nuclear fusion, quantum information science, semiconductors and microelectronics, discovery science, and energy.
This is a two-phase AI-for-science/AI-for-energy opportunity focused on building interdisciplinary teams that can demonstrate a meaningful “AI advantage” in research workflows. Phase I is a smaller, shorter effort intended to demonstrate a concrete workflow and quantify why the approach merits further investment. Phase II is the larger effort for teams pursuing the most promising directions, with substantially more effort and budget. Phase I projects are anticipated to run about 9 months; Phase II projects about 3 years. Completing Phase I is not required to apply for Phase II.
Eligibility/Team notes
This is a multi-institutional team opportunity.
Phase I: team must include partners from at least two of these categories: DOE/NNSA National Laboratory or Scientific User Facility; Industry; IHE/Non-profit/Other.
Phase II: team must include at least one DOE/NNSA National Laboratory or Scientific User Facility partner and at least one Industry partner; IHE/Non-profit/Other partners are strongly encouraged but not required.
Consortium membership is not required to apply or receive funding.
Deadlines: Abstract (recommended) March 2, 2026; Proposal April 10, 2026
DARPA is seeking innovative basic or applied research to create high-assurance “AI systems of systems” by developing a theory-driven architectural foundation for hierarchical composition of Machine Learning (ML) and Automated Reasoning/Knowledge Representation & Reasoning (AR/KR&R) subsystems. Emphasis is on verifiability and strong explainability grounded in automated logical proofs and reusable logic “building blocks,” with solutions that remain computationally scalable (and not just incremental improvements).
Two Technical Areas:
TA1 (primary): Develop new high-assurance ML/AR approaches—tightly coupling AR + ML, with theory/algorithms, open-source implementations, scalability, and rapid adaptation to new data (including human-editable knowledge updates).
TA2: Build a software composition/integration library (APIs, interfaces, common data formats, end-to-end explanation) for TA1-developed tools; TA2 emphasizes integration expertise and interoperability across diverse performer software stacks.
Focus Areas Aligned to CIST
Trustworthy AI
Knowledge representation & automated reasoning
Explainable/interpretable AI
Neuro-symbolic AI
Scalable AI systems engineering
Award Summary
Mechanism: DARPA Other Transaction (OT) – Prototype
Total funding: up to $2.0M across both phases
Period: up to 24 months (Phase 1 base 15 months + Phase 2 option 9 months)
Rolling or Recurring Federal Funding Opportunities
Proposal Target Dates (no hard deadline): February 5, 2026; thereafter, first Thursday in February and second Thursday in September, annually
Future CoRe is the main umbrella for CISE “core” research. It covers algorithms, systems, networks, AI, HCI, data/ML, cyber-physical systems, and foundations of emerging technologies—essentially the full IS&T research spectrum.
Proposal Target Dates (no hard deadline): January 26, 2026; thereafter, last Monday in January and last Monday in September, annually
Flagship program for cybersecurity, privacy, resilience, and “trust in cyberspace,” spanning computing, AI, social/behavioral, mathematics, and education. Ideal for work in security, usable privacy, trustworthy AI, and socio-technical security.
Proposal Deadline: May 4, 2026; first Monday in May, annually
Foundational methods for biomedical/healthcare digital twins and synthetic data/synthetic humans, emphasizing interdisciplinary mathematical and engineering foundations.
Proposal Deadline: Fourth Wednesday in July, annually
Early-career faculty submit CAREER proposals to a disciplinary program (e.g., a Future CoRe subprogram like SHF or HCC, or SaTC 2.0).
CAREER proposals are evaluated within those core programs but under the CAREER expectations (integrated research and education plan, career development trajectory).
Please send any leads you think should be included in MavOps!