November 13, 2025

O-1 Judging Criterion for AI/ML Experts: Prove Expert Status in 2025

Complete guide for AI and ML experts to prove the O-1 judging criterion. Learn documentation strategies for code reviews, paper reviews, and technical evaluation.

Key Takeaways About the O-1 Judging Criterion:
  • AI and ML experts can satisfy the O-1 judging criterion through peer review of papers, code reviews, hackathon judging, or evaluation of grant proposals.
  • Proving judging work for the O-1 requires documentation including reviewer invitations, review assignments, decision letters, and evidence of your evaluation authority.
  • O-1 judging evidence for AI experts includes conference program committee service, journal reviewer status, and participation in technical standards bodies.
  • ML professionals can prove the judging criterion through hiring interview participation, technical screening for recruiting, or mentorship evaluation roles.
  • Code review evidence for the O-1 involves documenting pull request reviews, technical design reviews, and participation in architecture decisions for major projects.
  • Paper review documentation for the O-1 requires copies of reviewer invitations, review completion confirmations, and letters from conference chairs or journal editors.

Understanding the Judging Criterion

The O-1 judging criterion requires you to demonstrate that you've been invited to evaluate or judge the work of others in your field. This proves peer recognition: people trust your expertise enough to let you assess the quality of technical work. For AI and ML professionals, judging opportunities exist in many forms beyond traditional academic peer review. Code reviews, design reviews, hiring evaluations, and mentorship feedback all demonstrate judging capacity in different contexts.

USCIS looks for evidence that you were selected to judge based on your expertise and reputation. Random opportunities don't count as strongly as selective invitations. Being asked to review papers for top-tier conferences like NeurIPS, ICML, or CVPR carries more weight than reviewing for unknown venues. Serving on program committees for respected conferences or workshops demonstrates even higher recognition. The key is showing you were chosen because of acknowledged expertise, not just random selection.

The judging criterion doesn't require formal academic roles. Industry professionals who never review academic papers can still meet this criterion through other judging activities. Reviewing code for major open source projects, evaluating technical proposals at your company, interviewing senior engineer candidates, or judging hackathons and pitch competitions all demonstrate evaluation of others' work. The variety of judging opportunities means most AI ML professionals can find ways to satisfy this criterion with proper documentation.

Wondering if your judging activities qualify for O-1? Beyond Border evaluates AI and ML professionals' experiences and identifies qualifying evidence.

Academic Paper Reviews

Paper review documentation provides some of the strongest O-1 evidence for the judging criterion. Conference and journal review requests come from program chairs or editors who recognize your expertise. Save all reviewer invitation emails. These emails typically state the conference or journal name, paper title, and deadline. They prove you were selected to review based on your technical knowledge in specific AI or ML subfields.

Document your review completion through multiple sources. Request confirmation letters from conference chairs or journal editors stating you completed reviews. These letters should specify how many papers you reviewed, for which venue, and during what time period. Some conferences send thank-you emails after review completion - save these as evidence. Login screenshots from conference management systems showing your completed reviews also help, though letters from organizers carry more weight with immigration officers.

Explain your review responsibilities clearly in your petition narrative. Describe the review process - reading papers, evaluating novelty and technical merit, providing detailed feedback, and making accept/reject recommendations. Note if you participated in program committee discussions about borderline papers. Higher levels of review responsibility like serving as area chair or senior program committee member demonstrate increased recognition. Area chairs typically manage multiple reviewers and make final decisions on paper acceptance.

Need help documenting your academic peer review work? Beyond Border helps AI professionals organize paper review evidence for maximum impact.

Code Review and Technical Evaluation

Code review evidence from industry work can satisfy the O-1 judging criterion effectively. AI and ML engineers regularly review code in pull requests, evaluate architectural designs, and assess technical proposals. These activities demonstrate you're trusted to judge the quality and correctness of others' technical work. Document your code review activities through GitHub pull request reviews, internal code review system records, and letters from managers or tech leads.

For open source projects, your public GitHub history provides evidence. Screenshot or export records of pull requests you've reviewed showing your comments, feedback, and approval or rejection decisions. If you review code for major projects used by thousands of developers, emphasize the impact and reach of your judging work. Reviewing code for TensorFlow, PyTorch, scikit-learn, or other widely-adopted ML libraries demonstrates you're trusted by the open source community to evaluate contributions to critical infrastructure.
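
As one way to compile this history, the short script below is a minimal sketch that queries GitHub's public search API for pull requests you have reviewed. The username placeholder and the optional repository filter are illustrative only; the endpoint and the "reviewed-by" search qualifier are standard GitHub features, and a screenshot of the equivalent search on github.com works just as well.

    # Minimal sketch: list pull requests you have reviewed on GitHub.
    # Replace the placeholder username; optionally add "repo:owner/name" to the query.
    import requests

    USERNAME = "your-github-username"  # hypothetical placeholder
    QUERY = f"type:pr reviewed-by:{USERNAME}"

    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": QUERY, "per_page": 100},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()

    print(f"Pull requests reviewed: {data['total_count']}")
    for pr in data["items"]:
        # Each item includes a title and a permanent URL you can screenshot or cite.
        print(f"- {pr['title']}  ({pr['html_url']})")

Unauthenticated requests to this endpoint are rate-limited, so for a long review history you may need to paginate the results or supply a personal access token.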

Internal corporate code reviews require different documentation since they're not publicly visible. Request letters from engineering managers or tech leads describing your code review responsibilities. Letters should explain that you were selected to review code based on your technical expertise, specify the types of systems or components you review, and note the impact of work you evaluate. If you review code for production ML systems serving millions of users, highlight the significance and responsibility of your judging role.

Want to document code review work for your O-1 case? Beyond Border helps tech professionals frame technical evaluation as strong judging evidence.

Grant Reviews and Funding Evaluation

Proving judging work for the O-1 through grant review participation carries significant weight. Government agencies like NSF, NIH, or DARPA invite recognized experts to evaluate research proposals. Companies running fellowship or grant programs similarly invite technical experts to assess applications. These invitations prove institutional recognition of your expertise in evaluating research quality, feasibility, and potential impact.

Document grant review work through invitation letters from funding agencies or program administrators. Letters should state the program name, your role as reviewer or panelist, and the selection process that led to your invitation. If you reviewed proposals for NSF CAREER awards or DARPA programs, emphasize the prestige and competitiveness of these funding opportunities. Reviewing proposals where only 10-15 percent get funded demonstrates you're helping make high-stakes decisions about research directions.

Corporate fellowship programs like the Google PhD Fellowship, Facebook Fellowship, or Amazon Fellowship provide similar opportunities. These programs invite senior researchers and engineers to evaluate PhD student applications. Save all invitation emails and reviewer portal access confirmations. Request letters from program coordinators confirming your participation and the number of applications you reviewed. The selectivity of these fellowships and your role in choosing recipients demonstrate to USCIS a clear judging capacity recognized by major tech companies.

Involved in grant or fellowship reviews? Beyond Border helps document funding evaluation work for O-1 applications.

Hiring and Interview Participation

O-1 evidence for AI experts can include participation in hiring and candidate evaluation. Companies invite senior engineers and researchers to interview candidates for technical positions. Your participation demonstrates you're trusted to judge whether candidates meet hiring standards. This evaluation of others' technical abilities and qualifications satisfies the judging criterion when documented properly.

Focus on interviewing senior or specialized candidates rather than entry-level ones. Interviewing senior ML engineers, research scientists, or technical leads shows you're evaluating expertise comparable to or exceeding your own level. Screen-share coding interviews, technical deep dives, and system design discussions all demonstrate judgment of others' technical abilities. Request letters from recruiters or hiring managers stating you were selected as an interviewer based on technical expertise in AI or ML.

Document your interview participation through calendar invites, interview feedback forms, and hiring decision records. If you participated in hiring decisions for high-impact roles, mention the seniority and responsibility of those positions. Interviewing candidates for ML infrastructure teams, AI research groups, or technical leadership roles demonstrates higher-level evaluation than interviewing junior engineers. Your role in selecting team members proves to USCIS that you're trusted to judge technical quality.

Want to use hiring participation as O-1 judging evidence? Beyond Border helps AI professionals document interview and evaluation work effectively.

Other Judging Opportunities

ML professionals can satisfy the judging criterion through diverse activities beyond paper and code reviews. Serving as a hackathon judge demonstrates evaluation of others' projects and technical implementations. Judging startup pitch competitions shows assessment of business models, technical feasibility, and market potential. Evaluating capstone projects for university programs proves recognition as an expert who can assess student work. Each activity demonstrates that you were selected to judge based on expertise.

Mentorship evaluation provides another angle for proving the judging criterion. If you mentor junior engineers or PhD students and evaluate their progress, document this relationship. Letters from mentees describing how you assessed their work, provided feedback, and helped them improve demonstrate judging capacity. Formal mentorship programs through professional organizations or company initiatives carry more weight than informal mentoring relationships.

Technical standards development involves judging proposed specifications and implementations. If you participate in working groups for ML frameworks, model formats, or deployment standards, you're evaluating technical proposals from others. Document your participation in standards bodies through membership confirmations, meeting attendance records, and voting records on proposals. Standards work proves you influence technical decisions at the industry level, beyond your immediate team or company.

Looking for creative ways to prove the judging criterion? Beyond Border identifies overlooked judging opportunities in your background and helps document them effectively.

FAQ

Can code reviews count toward the judging criterion for an O-1 visa? Yes, code reviews for open source projects or internal corporate systems demonstrate judging when documented through GitHub records, manager letters, and evidence of your technical evaluation authority.

Do I need to review academic papers to meet the O-1 judging criterion? No. AI and ML professionals can satisfy the judging criterion through code reviews, hiring interviews, hackathon judging, grant reviews, or other technical evaluation activities beyond academic peer review.

How many paper reviews do I need for the O-1 judging criterion? While no specific number is required, reviewing 10-15 papers for recognized conferences or journals over 2-3 years typically provides sufficient evidence when properly documented with invitation letters and confirmations.

Can mentorship count toward the O-1 judging criterion for tech professionals? Yes, formal mentorship where you evaluate mentees' progress and provide feedback can contribute to the judging criterion, especially when documented through program participation and mentee testimonials.

We’ve handled this before. We’ll help you handle it now.

Let Beyond Border help you apply lessons from the past to tackle today’s challenges with confidence.


Struggling with your U.S. visa process? We can help.
