It’s getting closer and closer to internship season. I’m preparing for my onsite with Twitter next month, and I want to share my responses to the questions on my application in the hope that they will be helpful for some of you.
#TellYourStory: In 280 characters or less, share with us who you are through a hashtag and explain why you chose that hashtag.
The hashtag I chose is #WeTheStudents. This is a hashtag Hack Club uses, and my friends there are building a movement to change how high school students engage with computer science and coding. I am extremely fortunate to be a part of this movement. It has changed many of my long-held personal beliefs: growing up as an only child, I was used to being protected by my parents, and I believed that students and young people have little to no say in deciding a future for themselves because of their lack of knowledge and experience. To me, the title “student” used to indicate powerlessness. But my experience learning to code and interacting with Hack Club’s members liberated me from the confines of my own family; it made me dare to adventure, live differently, and stay open to new knowledge. Now I am more than proud to be a student, because this title signifies that I am forever a learner who uses knowledge to empower others, and that power multiplies with #WeTheStudents.
#ShipIt: Our engineers are constantly shipping (launching) new features and functionality on the platform. In 280 characters or less, list one idea you would ship that would impact the way diverse users interact with the platform?
Twitter’s accessibility efforts have already greatly enhanced the usability of the platform for disabled users. For example, Alt Text is one of the best features for enabling blind users to access images on Twitter. However, its usability is compromised when content providers are unaware of the option, when it takes too much effort for them to manually write descriptions for every image, or when the descriptions they provide are not detailed enough. Machine learning and image recognition techniques can address this by recognizing elements in the image and generating a detailed description automatically. After the basic feature ships, further optimizations, such as more natural language in the generated descriptions, can follow, and content providers can still use the manual input option during the transition period for this feature.
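The core of the idea is a simple fallback policy: prefer the author’s own alt text, and only generate a caption when the manual description is missing or too sparse. Here is a minimal sketch of that policy in Python. Everything here is hypothetical, not Twitter’s actual implementation: `resolve_alt_text`, `ImageAttachment`, and the `caption_model` callable (which stands in for whatever image-recognition model would do the real captioning) are names I made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ImageAttachment:
    """An image attached to a tweet, with optional author-written alt text."""
    image_bytes: bytes
    manual_alt_text: Optional[str] = None


def resolve_alt_text(
    attachment: ImageAttachment,
    caption_model: Callable[[bytes], str],
    min_manual_length: int = 10,
) -> str:
    """Prefer author-written alt text; fall back to a generated caption.

    A manual description shorter than `min_manual_length` characters is
    treated as too sparse to stand on its own, so a generated caption is
    appended rather than discarding the author's words.
    """
    manual = (attachment.manual_alt_text or "").strip()
    if len(manual) >= min_manual_length:
        # Detailed manual alt text wins: the author knows the context best.
        return manual
    generated = caption_model(attachment.image_bytes)
    # Keep any short manual text as a prefix so author intent is not lost.
    return f"{manual}: {generated}" if manual else generated
```

With a stub captioner in place of a real model, a detailed manual description is returned unchanged, an image with no alt text gets the generated caption, and a too-short description like "dog" gets the caption appended. The threshold-plus-prefix design is one way to honor the post’s point that manual input should remain available while the automatic feature rolls out.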