Yi Chen - Google | LinkedIn (2025)

Table of Contents

  • About
  • Activity
  • Experience & Education
  • Licenses & Certifications
  • Languages
  • Recommendations received
  • More activity by Yi
  • Other similar profiles
  • Explore more posts
  • Others named Yi Chen in United States
  • Add new skills with these courses

Stanford, California, United States

1K followers 500+ connections

Google

Peking University

About

Data Scientist with 3+ years of experience in data analysis, A/B testing, and machine learning…

Activity

  • I am truly humbled and honored to receive this award. It means so much to me that you have taken the time to review my journey at CloudAEye. I would…

    Liked by Yi Chen

  • The SF Bay Area already has 66 AI events (and 5 AI Hackathons) lined up for Jan, and this is prob one of the slowest months for the industry. Data…

    Liked by Yi Chen

  • Looks like there is some more drama around OpenAI's o3 demo, this time around the FrontierMath benchmark. Let's look at this issue: 1. OpenAI has…

    Liked by Yi Chen

Experience & Education

  • Google

    ****** **** *******

  • ********

    **** *******

  • ***** ****** *******

    ********** *********

  • ****** **********

    ******'* ****** *********

  • *** ********** ** **** ****

    ******'* ****** *******, *******

Licenses & Certifications

  • CFA Level I

Languages

  • English

    Professional working proficiency

  • Chinese

    Native or bilingual proficiency

Recommendations received

  • John Gatewood: “Chen Yi gave great insight during our group discussions on economics issues. She worked very hard as a student and teaching assistant, always willing to help others during her free time. She will be a great economist and an even better friend.”

1 person has recommended Yi

More activity by Yi

  • 🚀🚀 Upcoming Founder & VC Events in SF by Founders Bay, sponsored by TheAgentic. JOIN OUR BIGGEST EVENT YET ft. Sequoia, a16z, Khosla:…

    Liked by Yi Chen

  • Product market fit is not about how many people are buying your product, it's about how many people are using it. Jaclyn Zhuang shared her insights…

    Liked by Yi Chen

  • “Placing the blame or judgment on someone else leaves you powerless to change your experience; taking responsibility for your beliefs and judgments…

    Liked by Yi Chen

  • Ask Lvlup.ai anything!

    Liked by Yi Chen

  • A few weeks ago, the amazing Xoogler.co network gathered at Singapore's startup space BLOCK71 Global to explore how leading AI companies are tackling…

    Liked by Yi Chen

  • This Reel had me laughing out loud. 😂 Seeing these raw, human moments, captured on Marco Polo, reminds me of the beauty of what we’ve built. Marco…

    Liked by Yi Chen

  • Buckle up - because I’ve got HUGE news! I pitched at the Startup Showcase during World Summit AI MENA, and AQ22 walked away with FIRST PLACE! 🥇 It…

    Liked by Yi Chen

  • HUGE immigration news! The DoS has confirmed that it will formally establish an H-1B renewal program within the US. A huge relief for so many…

    Liked by Yi Chen

  • “Have your agent talk to my agent, and they’ll figure it out.” Within the next five years, agents will transform the way we collaborate, and improve…

    Liked by Yi Chen

  • 🚀 Kickstart 2025 with GenAI Pioneers! Join us for an exclusive event that brings together top AI leaders, researchers, and investors shaping AI's…

    Liked by Yi Chen

  • 🚀 Founder Dinner: From Google DeepMind to AI Innovation 🌟 Join Us for an Intimate Evening with Bespoke Labs (Raised $7.25M Seed) Founder & Turing…

    Liked by Yi Chen

  • 💥 "I could have died that day." - Nancy Zheng. Please check out the interview clip at https://lnkd.in/gd4DYHW5 That thought hit me hard as I lay…

    Liked by Yi Chen

  • 🎉 Collov AI was thrilled to join the #AIAgent discussion with Echo Zhong, CPA of Tax from Kick (funded by OpenAI), and the team from Omnify Labs…

    Liked by Yi Chen

  • I just sent this to the whole team but I think it applies to all startups.

    Liked by Yi Chen

  • 40 episodes of Founders in Arms in, I'd say 4 things about doing a podcast: 1️⃣ It's a lot of time and energy! 2️⃣ The ROI is questionable :)…

    Liked by Yi Chen

  • Building crypto infrastructure means navigating two worlds: technology and regulation. On this week's Founders in Arms, Rajat Suri and I sat down…

    Liked by Yi Chen

Other similar profiles

  • Jingyu Z., Data Scientist at Meta, San Francisco Bay Area
  • Yaseen Khan Mohmand, Cambridge, MA
  • Ken Yu, Sunnyvale, CA
  • Mijia Chen, Data Analyst at 字节跳动 (ByteDance), Shanghai, China
  • Shruti Bhat, New York, NY
  • Soham Shelat, Chicago, IL
  • Grace Y., Cliffside Park, NJ
  • Bowen Ma, San Francisco Bay Area
  • Yash Varotaria, Sunnyvale, CA
  • Ellen (Weijing) Lu, Beijing, China

Explore more posts

  • Scale AI: LLMs have become more capable with better training and data, but they haven’t figured out how to “think” through problems at test time. The latest research from Scale finds that simply scaling inference compute (giving models more time or attempts to solve a problem) is not effective, because the attempts are not diverse enough from each other.

    👉 Enter PlanSearch, a novel method for code generation that searches over high-level "plans" in natural language to encourage response diversity. PlanSearch enables the model to “think” through various strategies before generating code, making it more likely to solve the problem correctly. The Scale team tested PlanSearch on major coding benchmarks (HumanEval+, MBPP+, and LiveCodeBench) and found it consistently outperforms baselines, particularly in extended search scenarios. Overall performance on LiveCodeBench improves by over 16%, from 60.6% to 77%. Here’s how it works (see the PlanSearch-style sketch after this list):

    ✅ PlanSearch first generates high-level strategies, or "plans," in natural language before proceeding to code generation.
    ✅ These plans are then further broken down into structured observations and solution sketches, allowing for a wider exploration of possible solutions. This increases diversity, reducing the chance of the model recycling similar ideas.
    ✅ These plans are then combined before settling on the final idea and implementing the solution in code.

    Enabling LLMs to reason more deeply at inference time via search is one of the most exciting directions in AI right now. When PlanSearch is paired with filtering techniques, such as submitting only solutions that pass initial tests, we can get better results overall and achieve the top score of 77% with only 10 submission attempts.

    Big thanks to all collaborators on this paper, including Evan Wang, Hugh Zhang, Federico Cassano, Catherine Wu, Yunfeng Bai, William Song, Vaskar Nath, Ziwen H., Sean Hendryx, and Summer Yue.

    👉 Read the full paper here: arxiv.org/abs/2409.03733
  • Intel Corporation: Unlock the full potential of your Large Language Models (LLMs) with Intel® Extension for PyTorch (IPEX) and the Intel® LLM Library for PyTorch (IPEX-LLM). Download this whitepaper to explore the optimization of LLMs, including their performance, resource utilization, and response times in real-world applications. Link: https://intel.ly/3BBJ4ey
  • Charin Polpanumas: More opportunities
  • Patryk Binkowski: 📢 New Breakthrough in Time Series Forecasting: TSMamba Model 📢 Researchers from Tsinghua University and several esteemed institutions have introduced a novel time series foundation model named TSMamba. This model leverages the Mamba architecture to tackle the complexity of time series forecasting across diverse domains. Unlike traditional models reliant on quadratic-complexity architectures like Transformers, TSMamba offers a linear-complexity alternative, enhancing both efficiency and scalability.

    Key highlights:
    - Bidirectional encoding: TSMamba incorporates both forward and backward Mamba encoders to capture time dependencies effectively.
    - Two-stage transfer learning: To reduce the need for extensive data and computational resources, TSMamba utilizes pretrained Mamba models, allowing for effective adaptation to specific time series tasks.
    - Cross-channel attention: A compressed attention module captures dependencies across channels, enhancing predictive accuracy on multivariate datasets.

    Experiments show that TSMamba performs on par with or even better than current state-of-the-art models, especially in scenarios with limited training data, such as zero-shot and full-shot forecasting. This approach could revolutionize forecasting across industries, from finance to healthcare, by offering more data-efficient and adaptable solutions.

    📚 Read more about the research here: https://lnkd.in/d_k3sfgv #TimeSeries #Forecasting #MachineLearning
  • Ali Arsanjani, PhD: [research update] Use a Transformer's multi-head attention activations to fetch documents RAG-style, augmenting multi-faceted problem breakdown for LLMs, in "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" (see the toy retrieval/merge sketch after this list):

    1. Enhances LLMs by integrating document retrieval (RAG) into the LLM context.
    2. Challenge: Existing RAG solutions struggle with queries needing multiple documents with varied contents due to distant embeddings.
    3. Multi-Head RAG (MRAG): Introduces a novel approach using the Transformer's multi-head attention activations to fetch diverse documents.
    4. Activation utilization: Different attention heads capture varied data aspects, improving complex query retrieval.
    5. Improved relevance: MRAG shows up to 20% improvement in relevance over standard RAG baselines.
    6. Evaluation methodology: Provides metrics, synthetic datasets, and real-world use cases to demonstrate effectiveness.
    7. Seamless integration: MRAG can be integrated with existing RAG frameworks and benchmarking tools like RAGAS.
    8. Multi-aspect problem solving: Designed specifically to handle multi-aspect queries more effectively.
    9. Data store compatibility: Compatible with various data store classes.
    10. Potential applications: Benefits scenarios requiring diverse information retrieval, enhancing LLM responses.

    Experiment: The paper introduces MRAG and evaluates it using synthetic datasets and real-world data. It compares MRAG's performance against standard RAG models. The experiment focuses on the model's ability to retrieve relevant documents across various query aspects, demonstrating up to a 20% improvement in relevance over baselines.

    Use case: MRAG is tested on real-world scenarios like customer support, where queries often require information from multiple documents with diverse contents. The model's capability to retrieve and integrate multi-aspect data enhances the quality of responses.
  • George Z. Lin: Researchers from Salesforce AI and UIUC have introduced a workflow for Online Iterative Reinforcement Learning from Human Feedback (RLHF), which enhances the performance of large language models compared to traditional offline methods. This approach addresses the challenge of incorporating human feedback in open-source projects by using proxy models constructed from diverse open-source datasets to approximate human feedback, making it more accessible to resource-constrained open-source projects.

    The workflow involves several key components, including reward modeling, iterative policy optimization, practical implementation, and evaluation. Reward modeling constructs preference models using datasets like HH-RLHF and SHP, employing the Bradley-Terry model to approximate human preferences and connect RLHF with reward maximization. Iterative policy optimization continuously updates the policy based on new data, mitigating over-optimization issues associated with finite offline datasets and ensuring robust performance with out-of-distribution data. Practical implementation offers a comprehensive guide for the framework, featuring supervised fine-tuning (SFT) and iterative direct preference learning with a hybrid batch learning approach to balance exploitation and exploration (see the DPO loss sketch after this list).

    The trained LLM, SFR-Iterative-DPO-LLaMA-3-8B-R, achieved state-of-the-art results on benchmarks such as AlpacaEval-2, Arena-Hard, MT-Bench, HumanEval, and TruthfulQA, outperforming larger models like GPT-3.5-turbo-1106. Additionally, an ablation study addressing length bias by incorporating a length penalty into the reward function effectively improved performance on length-control benchmarks. The approach integrates supervised fine-tuning with iterative RLHF, achieving state-of-the-art performance using fully open-source datasets.

    Arxiv: https://lnkd.in/eQP8ivYt
  • Shivendra Upadhyay: Fast and memory-efficient thanks to QLoRA. In this article, we will see how to fine-tune LLMs for function calling (see the QLoRA setup sketch after this list). #llms #chatgpt4 #llmops #aiml #rag https://lnkd.in/dTksNxB2
  • Towards Data Science: Follow along with Chaim Rand's new model-optimization tutorial to learn how you can leverage PyTorch NestedTensors, FlashAttention2, and xFormers to boost transformers' performance and reduce costs (see the NestedTensor and fused-attention sketch after this list).
  • Abhishek Murthy: My intern at Schneider Electric and student at Northeastern University, Udisha Dutta Chowdhury, is presenting our work at PyData NYC tomorrow! Her talk is titled "Adopting Open-Source Tools for Time Series Forecasting: Opportunities and Pitfalls". In addition to introducing sktime and skforecast as case studies, she will also guide the attendees through the broader considerations of tool selection for forecasting. We will emphasize the importance of data understanding, data preparation, and backtesting in building effective and reliable forecasting pipelines (a minimal sktime backtest example follows this list). If you are at the conference, please come say hi! The talk will also be posted on YouTube later. #techtalk #algorithms #forecasting #machinelearning #datascience
  • JiuFeng (Felix) Zhou: User embeddings are crucial for downstream personalized models, but larger embedding models come with higher latency, making real-time inference challenging. This often forces a trade-off: a small embedding model for real-time updates, or a large embedding model with offline batch updates. Meta's recent paper proposes a hybrid approach: combine stale embeddings from a feature store with real-time embeddings from a complicated user model (with a timeout). If real-time calculations exceed the timeout, use the stale embeddings; otherwise, use both. Completed real-time embeddings are then flushed to the feature store, improving future requests (see the timeout-fallback sketch after this list). Paper link:
  • Anyscale: 🚀 See how Handshake cut their LLM GPU costs by 50% with Anyscale. Discover how they: 💰 reduced LLM GPU costs by 50% or more, 📈 seamlessly scaled large language models (LLMs) without compromising performance, and ⏱ enhanced operational efficiency, enabling faster development cycles. Check out the full story here: https://lnkd.in/gSiKJDaY
  • Cameron R. Wolfe, Ph.D.: Reinforcement learning (RL) is commonly used to finetune LLMs based on human feedback, but did you know that we can also use RL to automatically improve our prompts? (A toy REINFORCE sketch follows this list.)

    Prompt engineering is the act of discovering (via trial and error) prompts that perform well. Small changes or rewordings to our prompt (or using a different LLM) can make a massive difference. For this reason, prompt engineering requires a lot of manual and tedious effort.

    Optimizing prompts: To automate this manual labor, researchers have proposed techniques for learning prompts from data. For example, prompt and prefix tuning [1, 2] add new (continuous) tokens to the prompt and finetune them. Methods like AutoPrompt [3] follow a similar strategy but keep learned tokens discrete so that they are human interpretable.

    Discrete optimization of tokens: The main difficulty with optimizing a prompt is that the prompt’s tokens are discrete. If we use the gradient to update a token, the likelihood of this update producing another valid token within the model’s vocabulary is effectively zero. Optimizing discrete tokens is not possible via traditional gradient-based algorithms.

    LLM generating prompts: One way we can optimize a prompt is by having a separate LLM output the prompt. Then, we can train the weights of that LLM, which are continuous, to produce better prompts instead of training the discrete prompt directly. But how do we optimize this LLM? There’s no differentiable training signal that can be used for this.

    Using RL: To optimize the LLM generating the prompt, we can use RL! Our policy network is the LLM that generates the prompt. This policy network will output a prompt using next-token prediction. Once a full prompt / sequence is generated, we derive the reward for this generation by measuring the prompt’s performance on a downstream task.

    Practical examples: Both RLPrompt [4] and TEMPERA [5] use RL to optimize discrete, human-interpretable prompts. RLPrompt optimizes prompts in an offline fashion (i.e., optimize a prompt once and use it for inference many times), while TEMPERA optimizes prompts dynamically at inference time.

    “The resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferrable between different LMs to retain significant performance, indicating that LM prompting may not follow human language patterns.” - from the RLPrompt paper

    Despite performing well, the prompts discovered by these techniques tend to be gibberish / ungrammatical, leading us to wonder whether prompting is a language of its own. The best prompts do not always follow standard rules applied to natural language.
  • Chen Peng: Quentin H. and Yanwei (Wayne) Zhang on the Faire Data team recently did some great work fine-tuning a Llama3 model to enhance search relevance. Here are some key insights from the work:

    • While prompt engineering is valuable, LLM fine-tuning is essential for a domain-specific task like understanding semantic search relevance.
    • Fine-tuned open-source LLMs like Llama3 can deliver impressive performance, scalability, and cost-efficiency compared to using proprietary models.
    • Effective application of a fine-tuned LLM demands a well-defined problem, robust, high-quality labeled data, and ongoing model refinement. Ground-truth labeled data is crucial for maximizing model performance.
    • Fine-tuned Llama3-8b achieves performance on par with the larger Llama2-13b model.

    Read the blog post to dive deeper into these findings! https://lnkd.in/ecP_3E7S We are also hiring (https://lnkd.in/dHfRZcPi). #LLM #AI #Finetuning #LLAMA3 #SearchRelevance #Hiring
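The PlanSearch workflow described in the Scale AI post above can be approximated with a small orchestration loop: sample several natural-language plans, expand each into a solution sketch, and only then generate code. A minimal sketch, assuming an OpenAI-compatible client and a hypothetical `run_tests` helper for the filtering step (neither comes from the paper):

```python
# Minimal PlanSearch-style sketch (assumed workflow, not Scale's implementation).
# `client` is any OpenAI-compatible chat client; `run_tests` is a hypothetical
# helper that runs a candidate solution against the problem's public tests.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, temperature: float = 1.0) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

def plan_search(problem: str, n_plans: int = 5) -> list[str]:
    """Sample diverse high-level plans, then generate code conditioned on each plan."""
    candidates = []
    for _ in range(n_plans):
        plan = ask(f"Describe, in prose only, a strategy to solve:\n{problem}")
        sketch = ask(f"Turn this strategy into concrete observations and a solution sketch:\n{plan}")
        code = ask(
            f"Problem:\n{problem}\n\nPlan:\n{sketch}\n\nWrite a Python solution.",
            temperature=0.2,
        )
        candidates.append(code)
    return candidates

# Filtering step: keep only candidates that pass the public tests before submitting.
# passing = [c for c in plan_search(problem) if run_tests(c, public_tests)]
```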
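For the Multi-Head RAG post, here is a toy sketch of the retrieval-and-merge step only: each attention head supplies its own query vector, each head retrieves its top-k documents, and the per-head results are merged by voting. The per-head vectors below are random placeholders; the paper derives them from the decoder's multi-head attention activations and keeps separate embedding spaces per head, which this sketch simplifies away.

```python
# Toy sketch of MRAG-style per-head retrieval and vote-based merging (assumed).
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_docs, n_heads, dim = 100, 8, 64

doc_embs = rng.normal(size=(n_docs, dim))        # one embedding per document
head_queries = rng.normal(size=(n_heads, dim))   # placeholder per-head query vectors

def top_k(query: np.ndarray, docs: np.ndarray, k: int = 5) -> list[int]:
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query) + 1e-9)
    return list(np.argsort(-sims)[:k])

# Each head votes for its own top-k documents; frequently retrieved docs win.
votes = Counter(doc for q in head_queries for doc in top_k(q, doc_embs))
merged = [doc for doc, _ in votes.most_common(10)]
print(merged)
```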
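The iterative direct preference learning step in the Salesforce/UIUC workflow post relies on a preference loss; below is the standard DPO loss in isolation (a textbook formulation, not the authors' code). Each online iteration samples responses from the current policy, ranks them with the reward model into (chosen, rejected) pairs, and minimizes this loss.

```python
# Standard DPO preference loss (not the authors' code). Inputs are summed
# log-probabilities of the chosen/rejected responses under the current policy
# and under the frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta: float = 0.1):
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy values: the chosen response is more likely under the policy than the reference.
loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.7]),
                torch.tensor([-13.0]), torch.tensor([-14.9]))
print(loss.item())
```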
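For the QLoRA post, a minimal setup sketch using bitsandbytes 4-bit quantization plus LoRA adapters from peft. The model name, target modules, and hyperparameters are illustrative assumptions, not values from the linked article.

```python
# Minimal QLoRA setup sketch (assumed): 4-bit base model + LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B"  # placeholder; any causal LM works

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections are a common choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```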
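For the NestedTensors / FlashAttention2 post, a small sketch of the two PyTorch pieces involved: storing variable-length sequences without padding via torch.nested, and calling the fused scaled_dot_product_attention kernel, which dispatches to FlashAttention or memory-efficient backends when available. A real model would also pass a padding mask; it is omitted here for brevity.

```python
import torch
import torch.nn.functional as F

# Variable-length token sequences (batch of 3 with different lengths).
seqs = [torch.randn(n, 64) for n in (5, 9, 7)]

# A NestedTensor stores them without padding every sequence to the longest one.
nt = torch.nested.nested_tensor(seqs)
padded = nt.to_padded_tensor(0.0)    # dense (3, 9, 64) tensor, zero-padded

# Fused attention: PyTorch selects a FlashAttention / memory-efficient kernel
# when available. A padding mask would be needed in real use.
q = k = v = padded.unsqueeze(1)      # add a head dimension -> (3, 1, 9, 64)
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)                     # torch.Size([3, 1, 9, 64])
```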
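As a companion to the PyData talk summary, a minimal sktime example (assumed, not taken from the talk): a seasonal-naive baseline with a simple hold-out backtest, illustrating the data preparation and backtesting discipline the post emphasizes.

```python
# Minimal sktime forecasting backtest (assumed example).
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error

y = load_airline()  # monthly airline passengers
y_train, y_test = temporal_train_test_split(y, test_size=12)

forecaster = NaiveForecaster(strategy="last", sp=12)  # seasonal-naive baseline
forecaster.fit(y_train)
y_pred = forecaster.predict(fh=np.arange(1, 13))      # 12-step-ahead forecast

print(mean_absolute_percentage_error(y_test, y_pred))
```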
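The hybrid stale/real-time embedding idea from the Meta paper post can be illustrated with a simple timeout-and-fallback pattern (an assumed sketch, not the paper's system): try the expensive user model under a deadline, serve the cached embedding from the feature store on timeout, and flush the fresh embedding back to the store once it completes.

```python
# Assumed sketch of the timeout-and-fallback pattern, not the paper's system.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

feature_store = {"user_42": [0.10, 0.20, 0.30]}  # stale embeddings (cheap to read)
pool = ThreadPoolExecutor(max_workers=4)

def heavy_user_model(user_id: str) -> list:
    time.sleep(0.2)                               # stands in for a large-model forward pass
    return [0.11, 0.19, 0.31]

def get_embedding(user_id: str, timeout_s: float = 0.05) -> list:
    fut = pool.submit(heavy_user_model, user_id)
    try:
        fresh = fut.result(timeout=timeout_s)
        feature_store[user_id] = fresh            # fresh result also refreshes the store
        return fresh
    except TimeoutError:
        # Serve the stale embedding now; write the fresh one back when it finishes.
        fut.add_done_callback(lambda f, uid=user_id: feature_store.update({uid: f.result()}))
        return feature_store[user_id]

print(get_embedding("user_42"))   # stale vector on this first, slow call
time.sleep(0.3)
print(feature_store["user_42"])   # refreshed once the model call completed
```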
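Finally, for the RL-for-prompt-optimization post, a toy REINFORCE sketch (much simpler than RLPrompt or TEMPERA, and entirely an assumption): the "policy" is just a categorical distribution over a handful of candidate instructions, and the reward comes from a stub accuracy function. In the papers, the policy is itself a language model and the reward is real downstream task performance.

```python
# Toy REINFORCE loop for prompt selection (assumed illustration only).
import torch

candidates = ["Answer concisely:", "Think step by step:", "Output JSON:", "asdf qwerty:"]
logits = torch.zeros(len(candidates), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

def task_accuracy(prompt: str) -> float:
    # Stub reward: pretend chain-of-thought style prompts score best.
    return {"Think step by step:": 0.9, "Answer concisely:": 0.6}.get(prompt, 0.3)

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    idx = dist.sample()
    reward = task_accuracy(candidates[idx])
    loss = -dist.log_prob(idx) * reward  # raise log-prob of high-reward prompts
    opt.zero_grad()
    loss.backward()
    opt.step()

print(candidates[int(torch.argmax(logits))])  # expected: "Think step by step:"
```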

Others named Yi Chen in United States

  • Yi Chen Brooklyn, NY
  • Yi Chen Cary, NC
  • Yi Chen Stanford, CA
  • Yi Chen Scarsdale, NY
  • Yi Chen Los Angeles, CA

1580 others named Yi Chen in United States are on LinkedIn

Add new skills with these courses

  • 1h 7m Choose the Right Tool for Your Data: Python, R, or SQL
  • 1h 58m Applied Machine Learning: Algorithms
  • 50m Machine Learning with Python: k-Means Clustering
