Yukti Arora
Product Designer & Former Designer Founder
I specialize in zero-to-one enterprise products.
My experience makes me fluent in driving clarity in ambiguous problem spaces, with a strong focus on balancing business and user needs.
I’m experimenting with AI, designing experiences that feel less like tools and more like collaborators.
Notable things about me.
Work. Examples of my Thinking + Making
Step 1: Splitting sheets
I worked closely with the engineering team to figure out the feasibility of automatically splitting the PDF during upload, within the browser.

Step 1.2: Data Extraction Through Template (only the first time)
The next step was naming the sheets and adding additional metadata to them. From my research, I knew all this data was present in the "Title Block".

Step 2: Review Drawings
This step enables human review of OCR results before upload, maintaining accuracy and control.
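To give a sense of what in-browser splitting involves, here is a minimal sketch. The library choice (pdf-lib) and the function shape are my assumptions for illustration, not the actual Buildsys implementation.

```typescript
import { PDFDocument } from 'pdf-lib';

// Rough sketch: split an uploaded multi-sheet drawing set into one PDF per
// sheet, entirely client-side, before anything is sent to the server.
// pdf-lib is an assumed library choice, not necessarily what was shipped.
async function splitDrawingSet(file: File): Promise<Uint8Array[]> {
  const source = await PDFDocument.load(await file.arrayBuffer());
  const sheets: Uint8Array[] = [];
  for (let i = 0; i < source.getPageCount(); i++) {
    const single = await PDFDocument.create();
    const [page] = await single.copyPages(source, [i]); // copy only page i
    single.addPage(page);
    sheets.push(await single.save()); // bytes for one sheet's standalone PDF
  }
  return sheets;
}
```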
At Buildsys, one of our breakthrough features was Automating Construction Drawing Upload.
I transformed a two-day, error-prone process into a seamless 20-minute workflow—a 100x improvement—by leveraging AI-driven automation while balancing technical constraints and user needs.
Founder & Design Lead
2019
UX Research
MVP Identification
UX Design
Interaction Design
As Co-founder and Chief Product Officer of Buildsys, a construction productivity SaaS, I led the journey from identifying market gaps to achieving product-market fit and scaling to an enterprise SaaS business in 10 cities across India, culminating in a successful exit.
One of the pivotal features that helped us break into the market was Automating Construction Drawing Upload, a solution that transformed a manual, error-prone process into a seamless, efficient workflow.
The user problem was the tedious manual process of uploading construction drawings and their metadata into folders on the cloud, with plenty of room for mistakes. Across construction projects, we consistently observed that stakeholders using outdated revisions would cause costly delays and rework, representing a significant business opportunity.
I led the development of a feature that reduced this process from 2 days to 20 minutes, a 100x improvement. Solving this problem was particularly challenging due to team resource and technical constraints. I made crucial tradeoffs to deliver a functional, user-friendly solution that addressed core pain points while adhering to development constraints - giving me my mantra: #UserLikesBusinessLoves.
Instant, non-intimidating, and personalized AI-based support
Call, chat, and breathing exercises use familiar interaction patterns for a warm, personalized experience.
Elpis is an AI-based self-help app for women coping with miscarriage.
I simplified a complex subject matter through iterative UX testing and intuition.
Design Lead
2024
Research
UX Testing
Strategy
Design System
UX Design
Visual Design
Interaction Design
Impact
Yukti! I just wanted to say that your ability to lean in, ask questions, and hold space on such a sensitive topic was amazing and it showed in the project you developed that felt so tailored to our experience. You're brilliant!
- Research Participant

Miscarriage is a sensitive topic. Researching it was challenging and required careful handling, and it was also a difficult subject to empathize with.
Elpis originated as part of a research class on designing for health emergencies, where my team focused on improving mental health support for women after miscarriage.
I led key aspects of the research, including user recruitment, interviews, and early-stage prototyping. At the end of the class, we tested three different concepts.
After the course, I independently expanded the project into a full-scale UX solution. I refined the idea using Google's HEART framework, carefully assessing signals, balancing feasibility and desirability through intuition, and incorporating AI to pivot in a new direction. The new direction went through another round of UX testing and refinement, along with visual design and interaction patterns that deeply resonated with users.
Natya.AI is an app for Bharatanatyam dancers that enhances practice time by providing real-time voice and visual feedback on your live practice recording, using multimodal AI.

Get live feedback during practice
The yellow overlay tracks your body movements, while the red overlay shows the correct form.

Reduce review fatigue
This interface splits the entire video into short snippets of good and bad movements.

Actionable Summary
I focused on making the feedback actionable.

Movement Library & Compare Movements
Auto-captures and classifies unique micro-sequences. As a dancer, see how your movements have improved over time.
Natya.AI is an AI companion for dance practice.
I crafted the vision for an AI-powered practice companion for dancers, making long-distance learning affordable and accessible.
Product Designer
2024
UX Research
Facilitation
Interaction Design
Visual Design
Natya is part of my Master's thesis exploring how Artificial Intelligence can bridge the gap between tradition and technology, making Bharatanatyam—a 2,500-year-old Indian classical dance form—more accessible to today's dancers.
Through interviews with 20+ AI experts and dancers, as well as surveys of 300+ Bharatanatyam practitioners, I identified pain points deeply rooted in the practice-review-feedback loop.
Natya was born from the insight "80% of dance is practice and 80% of practice is correction." Dancers record their practice sessions to visualize their movements in space.
The two main challenges were the lack of live feedback and the fatigue caused by reviewing practice recordings.
This led me to hypothesize:
What if Bharatanatyam dancers could receive real-time prompts, mimicking the in-class Guru experience, while reducing review fatigue by at least 50%?
Designing the voice interface involved role-playing feedback scenarios to ensure prompts felt intuitive and unobtrusive. For visual feedback, I tested various body-tracking algorithms to identify constraints and prioritize actionable insights.
After several iterations and user testing, I introduced innovations such as the “splits” interface that made the review process faster, and automated feedback annotations which made the feedback more actionable.
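As a purely hypothetical illustration of the logic behind the "splits" interface, per-frame movement scores could be grouped into contiguous good/needs-correction snippets. The names, types, and threshold below are mine, not from the actual project.

```typescript
// Hypothetical sketch: group per-frame accuracy scores into contiguous
// "good" / "needs correction" snippets so a dancer can skim the review.
interface Snippet {
  startSec: number;
  endSec: number;
  label: 'good' | 'needs-correction';
}

function buildSnippets(
  frameScores: number[], // one score per frame, in the range 0..1
  fps: number,
  threshold = 0.8,       // assumed cutoff: scores at or above this are "good"
): Snippet[] {
  const snippets: Snippet[] = [];
  let start = 0;
  for (let i = 1; i <= frameScores.length; i++) {
    const prevGood = frameScores[i - 1] >= threshold;
    const currGood = i < frameScores.length && frameScores[i] >= threshold;
    // Close a snippet whenever the label changes or the video ends.
    if (i === frameScores.length || currGood !== prevGood) {
      snippets.push({
        startSec: start / fps,
        endSec: i / fps,
        label: prevGood ? 'good' : 'needs-correction',
      });
      start = i;
    }
  }
  return snippets;
}
```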
Impact
The voice prompts felt like having my Guru in the room. It will completely change the way I approach solo practice.
- Varshini, Bharatanatyam Dancer & Performer
This is the best application of AI I have seen so far.
- Sam Potts, Designer @ Apple
In case you’re curious.
Here’s my story.



I studied Industrial Design at Ohio State University, where I discovered that my greatest passions and abilities were at the intersection of visual thinking, digital product design, and business strategy. These are skills I learned by doing at my startups. I realized I wanted to work on problems that excite me and could impact the world.
In search of impact, I founded Buildsys, a construction productivity SaaS. At Buildsys, I learned what it takes to manage a creative culture, built teams that approach problems from a human-centered perspective, crafted business, sales, and marketing strategies, and designed a product that is easy to use and adopt.
From 2017 to 2022, I scaled Buildsys from an idea to a successful enterprise SaaS business with clients in 10 cities across India and exited the company in early 2022.
I created Wazo Space Station, an interactive digital dollhouse set in outer space, designed for play, storytelling, and fun for kids ages 3-7.
I'm in love with the power of play and curious about the interplay of design, technology, and play as a way to have an impact on how we learn, tell stories, spark curiosity, build connections, and collaborate.
All of these experiences make me fluent in driving clarity in ambiguous problem spaces. I also realized it's about tradeoffs that balance business and user needs.
In 2024, I graduated with an MFA in HCI from the School of Visual Arts, New York, where my thesis explored the interplay between dance and AI.
When not engaged in serious play, I am probably making a pretty-looking salad, drawing with friends, climbing, dancing, snowboarding, or dreaming of the outdoors.
Design for me is Thinking + Making
done with people, in an iterative fashion.
3 Things I bring to teams:
I surround myself with smart people, collaborate and focus on the right tradeoffs. #UsersLoveBusinessLoves
Impact
I like to jump into research with no preconceptions. I design with people - I simply co-create with them and facilitate with intention.
Serious Play
I care and think deeply about bringing the best out of products, people and systems.
Product Direction & Leadership
Writings, Press, Talks.
From cozy workshops to global stages, I'm passionate about sharing stories. Some of my writings have been featured in the Times of India, Economic Times, and The Sunday Times (leading newspapers in India).
Featured in the @design.stri #31Days31Voices campaign for Women's Day, alongside 30 amazing women in business, design, and architecture.
Year
Type
Title