This year, I had the wonderful opportunity to participate in two consecutive academic conferences: ICLR in Singapore from April 24-27, and CHI in Yokohama, Japan from April 28 to May 1.
First, I should humbly clarify that I didn't have a full paper at either conference; I mainly went to attend workshops and present posters. So I was less a participant than an observer. I'd like to share my observations on how the content and style of these two top conferences, in two different fields, differ, and how each of them felt.
First Impressions - Economic Value vs. Emotional Value
ICLR is like a large, sparse network: each hall hosts thousands of people, and as an attendee it's impossible to see all the content you're interested in within the limited time. Viewing posters and listening to talks at ICLR gave me a strong sense of being an NPC; there were simply too many people. The proportion of introverts among presenters also seems very high: some students looked visibly uncomfortable after presenting their posters and couldn't wait to leave.
By comparison, CHI is small and beautiful, with no large venues. Each talk happens in a small room with a few dozen people, which makes for strong engagement and interaction. The proportion of extroverts at CHI is noticeably higher; people chat about anything and everything. Many students even brought souvenirs from their schools or labs to exchange, which felt very warm. (How is this different from a comic con?)
In terms of economic value and returns, the number and caliber of sponsors at ICLR versus CHI are worlds apart. The company booths at ICLR are very high quality. As an ordinary student "working on agents," I could walk around ICLR and collect countless HR contacts, hearing "we have a team working on this direction, send us your resume" over and over. Of course, whether you actually land a match is another matter, but at least the AI field offers plenty of publicly accessible entry points.
Such entry points are scarce at CHI. Although CHI's sponsorship this year was already generous compared to previous years (besides the regulars Apple, Google, Meta, and Microsoft, many Japanese organizations sponsored generously: NTT, Sony, the Yokohama city government), the total number is still small. Chinese tech companies are even rarer; so far I've only seen Huawei's booth. Likewise, as an ordinary student "working on agents," after visiting every booth I gained nothing beyond some polite socializing. I even began to wonder: do these companies actually hire here, or is this just advertising space they bought?
This isn't surprising, since HCI itself is hardly profitable. The more your work focuses on minority and niche groups, the more it deserves respect; correspondingly, though, there are fewer HCI industry positions, and few tech companies direct their promotional budgets at this field. Only the giants that aren't short on money will pay for this kind of advertising.
What kind of research should we do? A SOTA competition, or running blindly in the dark?
Homogenization and Imagination
AI research topics seem to homogenize easily. Looking at agent-related research, you can almost paint a picture of everyone's life: multi-agent systems playing board games, agents playing bridge, agents doing game theory, AI scientists. (The research topics you pick reflect the life you live...)
In contrast, CHI's topics: meditation assistants, outdoor-activity assistants, AI mental-health therapy, AI-assisted creation. Sometimes reading HCI papers makes you think: you can research this too?
The Spiral of Meaninglessness
Any research involving human values is difficult to evaluate: you can hardly say that research on visually impaired groups is deeper than research on depressed groups, or that work on elderly companionship is more meaningful than work on childhood autism.
But for technology-oriented papers, comparison is unavoidable. And because comparison is so direct, every AI researcher must painfully face the eternal questions: "Is my method better than everyone else's?" and "If I can't beat the others, is my research garbage?"
The "difficult to evaluate" point is reflected in both sides' evaluation/award mechanisms. CHI has 50 best papers a year and 250 best nominations. ICLR has only 3 best papers and 3 best nominations. Even if you're in the top 1% of ICLR research, you can only get an oral opportunity; but at CHI, all accepted full papers can be oral (100%).
But Where Should Humanity Go?
Augmentation or Replacement?
Should AI augment humans, or replace them? At the level of slogans, everyone can reach a lukewarm consensus, but in terms of actual paths, the two fields are making completely different attempts: the AI field says, of course we should replace; the HCI field says, of course we should augment.
Although both approaches are still being explored, whether AI ultimately replaces or augments may depend on the task. If a task can directly create economic value in the market, it will inevitably move toward "AI replacement" (AI programming, AI data analysis, AI customer service). Otherwise, at least in the short term, one can focus on "AI augmentation" (emotional companionship, parent-child relationships, education).
AI is neither destined to assist nor destined to replace. Staying in one field too long breeds a kind of mental inertia, where we start treating the results as the objectives.
Odds and Ends
On the first day of ICLR, Professor Zhu Songchun's talk on "tong tong tong (通通通)" and AGI was indeed eye-opening. But one thing he said stuck with me: "Many humanities and social science scholars have a special obsession with the value of human intelligence, believing that humans are different from other creatures." He found this idea absurd: any natural organism, once born, can be simulated.
I know that in CHI and the broader social sciences, "human value" is an inviolable premise. This is not an objective fact but a stance: because we are carbon-based human beings, we must put human value and human uniqueness first. Yet this human-centered idea is not universally shared. The faster technology develops in a field, the less people treat humans as unique, and the more they treat humans as objects that can be statistically modeled and simulated. I vaguely feel that human-centeredness and technological development are not mutually reinforcing but mutually constraining: if you care too much about being human-centered, then don't pursue technological development.