The beta version of Fasta.ai was released as A.I. training software for social media posts. Its users, social media analysts, need to categorize posts for the A.I. to learn from, and then get insights from the A.I.'s predictions. In the beta version, to get insights from the A.I., users had to click a predefined category name, then a subcategory name, and either type in example posts or import pre-categorized social media posts from an Excel sheet. The process was tedious, and the layout created a lot of confusion.
Before diving into the design process, we need to understand how the A.I. works:
Scrape social media posts for users
Apply an A.I. model to the raw feed and get insights from the application
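To make the two steps above concrete, here is a minimal, hypothetical sketch of the "scrape, then apply a model" flow. None of these names (Post, scrape_posts, apply_model) come from Fasta.ai; they are stand-ins that only illustrate the shape of the pipeline.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def scrape_posts(brand: str) -> list[Post]:
    """Stand-in for step 1: fetch raw posts mentioning a brand."""
    # In the real product this would call a social media scraping service.
    return [Post(author="@fan_account", text=f"Loving the new {brand} release!")]

def apply_model(posts: list[Post], topics: list[str]) -> dict[str, list[Post]]:
    """Stand-in for step 2: group posts under the chosen topics."""
    insights = {topic: [] for topic in topics}
    for post in posts:
        for topic in topics:
            # A real A.I. model would classify the post; keyword matching
            # is only used here to keep the sketch self-contained.
            if topic.lower() in post.text.lower():
                insights[topic].append(post)
    return insights

raw_feed = scrape_posts("Acme")                      # step 1: scrape
insights = apply_model(raw_feed, ["release", "bug"]) # step 2: apply a model
```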
Research
We interviewed six people, aged 25 to 42, who worked as social media managers and strategists. All the interviewees had already used the beta version of Fasta.ai, but they did not enjoy using it.
Digging into all of the user feedback and research, we summarized the key user problems. Here are quotes from users and a persona summarized from the research:
After gaining a deeper understanding of the users, we redesigned the workflow of Fasta.ai to let users get data and apply an A.I. model flexibly. Users can choose raw data from social media for a specific set of brands, companies, or subjects. Every A.I. model contains topics and sub-topics, and users can then choose which topics within a model to use to analyze the raw data.
When developing the prototypes, we constantly discussed options with the data science team to build an efficient tagging flow within the technical constraints. From the designer's standpoint, our goal was to give users the freedom and flexibility to pull feeds from social media themselves and to apply any A.I. model to analyze those feeds.
In the beginning, because of technical constraints, users could only request data scraping by emailing us; we would then set up the scraping task in their accounts.
Later, users could scrape data by themselves but were not yet able to apply an A.I. model to the data. From a business and development standpoint, it was beneficial to roll out the feature sooner rather than later. So while the development team was still working on applying an A.I. model to the data, we built a dashboard that helped users transition to the new features that would roll out in the near future.
After the A.I. model feature was fully developed, we designed a prototype that lets users set up a monitor to scrape data and apply an A.I. model flexibly.
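To illustrate what such a monitor bundles together, here is a hypothetical sketch of a monitor configuration. The field names (brands, model, topics, schedule) are my own illustration, not Fasta.ai's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Monitor:
    """Hypothetical monitor: what to scrape and which model/topics to apply."""
    name: str
    brands: list[str]                  # brands/companies/subjects to scrape posts for
    model: str                         # which A.I. model to apply to the raw feed
    topics: list[str] = field(default_factory=list)  # topics within that model
    schedule: str = "daily"            # how often the scrape + analysis runs

# Example: one monitor that scrapes posts about two brands and analyzes
# them with a sentiment model, limited to two of its topics.
monitor = Monitor(
    name="Acme launch tracking",
    brands=["Acme", "AcmePro"],
    model="customer-sentiment",
    topics=["product quality", "shipping"],
)
```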
This project gave me real insight into how the application was developed. However, shifting priorities and changing roadmaps delayed the launch of this feature. Still, I took away some important lessons about product and business processes.
Collaboration and communication are key
Each member of our team took responsibility for designing different parts of the system, but we always reviewed our results after each sprint to make sure the product stayed cohesive throughout. Moreover, it was important to communicate our design concepts clearly to data scientists and developers so that the development team and the design team stayed on the same page.
How could we take it further?
Because we rushed to roll out the "create monitor" function, we did not do enough usability testing to understand the weaknesses of the experience. We should continue to run usability tests and iterate on the design of the monitor function in the future.