iSpeakerReact development history
How it began
Before it was known as iSpeakerReact, the project was called iSpeaker.
It originated from the version included on the CD-ROM of the Oxford Advanced Learner’s Dictionary 9th edition in 2015.
Developed by Oxford University Press, the tool came packed with features, many of which were transferred to today’s iSpeakerReact. It was built using AngularJS and jQuery.
However, due to its reliance on Node.js, it couldn’t run in standard web browsers, so we modified its code to make it browser-compatible. Unfortunately, we had to make some compromises. For example, the recording feature had to be removed, as we didn’t know how to implement it at the time. The app was also quite buggy and not user-friendly, especially on mobile devices.
The first revamp
In 2023, we gave the interface its first major redesign (pull request). We adopted Bootstrap 5 to modernize the UI.
Thanks to Bootstrap 5, the app became more visually appealing and responsive on mobile, and we introduced dark theme support for the first time.
Despite the visual improvements, the core of the app still relied on AngularJS, which had been deprecated in 2022. It became clear that a complete rebuild was necessary.
Rethinking the app
To address the shortcomings of the old app, we began considering a complete rewrite.
Our first thought was to migrate to Angular, the modern successor to AngularJS. But we found it quite difficult to learn, as it differed significantly from AngularJS.
We then explored Vue.js. But learning single-page app (SPA) development from scratch proved to be a challenge. We struggled even with building the homepage, constantly tweaking CSS and trying to integrate Bootstrap’s vanilla JavaScript 🤦‍♂️. Eventually, we gave up on Vue.
The second revamp
In March 2024, after gaining access to ChatGPT, we asked it to help us rebuild the app using React. ChatGPT helped us create the project’s basic structure, which was a great starting point. We used Create React App to scaffold the project.
In this React version, we successfully implemented the recording feature. However, we ran into issues with audio playback on iOS 16. Despite spending nearly a month troubleshooting, we couldn’t resolve it at the time.
Five months later, we resumed work. By then, Create React App was being deprecated, so we switched to Vite. While CRA was beginner-friendly, Vite proved faster and better suited for our needs—especially for deploying to GitHub Pages.
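For context, deploying a Vite project to GitHub Pages mostly comes down to setting the correct base path. Here is a minimal sketch of such a config; the repository path in `base` is a placeholder for illustration, not necessarily the one we use:

```ts
// vite.config.ts — minimal sketch; the repo path in `base` is illustrative
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
    plugins: [react()],
    // GitHub Pages serves project sites from a sub-path,
    // so asset URLs must resolve relative to the repository name
    base: "/ispeakerreact/",
});
```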
We began implementing the key features from the original app, starting with the Conversation and Exam sections, followed by the Exercise section.
On September 6, 2024, the new version was merged into the main branch—after months of coding, debugging, and consulting both ChatGPT and Claude.
To reflect the changes, we renamed the app to iSpeakerReact, representing both its modern tech stack and the enhanced features inspired by the original tool.
Electron support
To let the app work offline like a normal desktop app, we added Electron support (pull request).
The original version, iSpeaker: Pronunciation Tool, was once available on the Microsoft Store. It was based on the first revamp and bundled with audio/video files. However, the file size—around 3GB—made it difficult to distribute and update.
With Electron support, we removed the bundled video files and added a feature to download them from online sources, reducing the size to about 700MB.
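As a rough illustration of the download-on-demand idea, the Electron main process can fetch a video and save it under the user-data directory. The IPC channel name, URL, and folder layout below are our assumptions for the sketch, not the app’s actual values:

```ts
// Sketch of on-demand video download in an Electron main process.
// The channel name, URL, and paths are illustrative assumptions.
import { app, ipcMain } from "electron";
import { mkdir, writeFile } from "node:fs/promises";
import path from "node:path";

ipcMain.handle("download-video", async (_event, fileName: string) => {
    const dir = path.join(app.getPath("userData"), "videos");
    await mkdir(dir, { recursive: true });

    const res = await fetch(`https://example.com/videos/${fileName}`); // placeholder URL
    if (!res.ok) throw new Error(`Download failed: HTTP ${res.status}`);

    // Buffering in memory keeps the sketch short; a real implementation
    // would stream large video files to disk instead
    const filePath = path.join(dir, fileName);
    await writeFile(filePath, Buffer.from(await res.arrayBuffer()));
    return filePath;
});
```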
We also implemented logging in the Electron version to help users report bugs more easily.
To automate releases, we wrote a script that builds and publishes the app to GitHub Releases. Getting this working was tough: we waited ~15 minutes each time just to see an error 😫. After much trial and error (and help from both ChatGPT and Claude), we got it working. Manual builds were an option, but they weren’t as secure or verifiable, which goes against our mission.
Localization support
Initially, all UI text was hardcoded in English. We later added localization support (pull request), allowing the app to be translated into different languages.
The first supported language was Chinese, thanks to @wekik.
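For a sense of what such a setup looks like in code, here is a minimal react-i18next sketch. The library choice, keys, and strings are illustrative assumptions on our part, since this page doesn’t name the i18n stack:

```ts
// Minimal react-i18next initialization sketch; the library choice,
// keys, and strings are illustrative, not the app's actual resources.
import i18n from "i18next";
import { initReactI18next } from "react-i18next";

i18n.use(initReactI18next).init({
    lng: "en",
    fallbackLng: "en",
    resources: {
        en: { translation: { record: "Record your pronunciation" } },
        zh: { translation: { record: "录制你的发音" } },
    },
});

export default i18n;
```

Components then look up keys through the `useTranslation` hook instead of hardcoding English strings, which is what makes community translation possible.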
For translation management, we started with Crowdin, but quickly hit its free-tier limit. After applying for the open-source plan, we continued for a while—until Crowdin abruptly suspended our project without notice. That was a huge setback. (Note to future projects: don’t use Crowdin.)
We then switched to Weblate, which turned out to be a perfect fit. It’s open-source, has generous limits, and integrates smoothly with GitHub. While the setup was initially complex, it was worth it in the long run.
The third revamp
While Bootstrap gave iSpeakerReact a solid foundation, we felt it made the UI look too similar to other Bootstrap-based websites.
Its rigid design system also limited our customization options.
So we adopted Tailwind CSS and daisyUI. This combo allowed us to create a more unique and flexible design system. We chose green as our primary color for its calming, eye-friendly appeal, a better fit than the default blue.
Before daisyUI, using Tailwind alone was challenging. Its flexibility made it hard to keep styling consistent, and we had to write a lot of repetitive utility classes. daisyUI streamlined the process while preserving the flexibility Tailwind is known for. Unlike with Bootstrap, we didn’t need custom CSS/Sass to override styles; we just used Tailwind classes.
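To make the contrast concrete, here is a hypothetical button: daisyUI supplies the component classes (`btn`, `btn-primary`) while plain Tailwind utilities handle one-off tweaks, with no override stylesheet involved:

```tsx
// Illustrative only: daisyUI component classes plus Tailwind utilities,
// with no custom CSS/Sass override layer required.
export function PlayButton({ onClick }: { onClick: () => void }) {
    return (
        <button onClick={onClick} className="btn btn-primary btn-sm rounded-full shadow-md">
            Play
        </button>
    );
}
```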
This third revamp is the app’s current iteration.
The Word section
Over time, we began adding new features that weren’t present in the original app—or simply weren’t feasible before.
The first major addition was the Word section (pull request). It helps learners practice pronunciation of common words from the Oxford 3000™ and 5000™ lists. Each word is broken into syllables, with primary and secondary stress clearly highlighted.
A standout feature is real-time syllable highlighting, showing learners exactly where and how to pronounce each syllable. There’s also a slow playback mode to help learners practice at a more comfortable pace.
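A simplified sketch of how such highlighting can be wired up in React follows. The `Syllable` timing shape, component name, and styling are assumptions for illustration, not the app’s actual code:

```tsx
// Simplified sketch: highlight the syllable whose time window contains
// the audio's current position. Data shape and classes are illustrative.
import { useEffect, useRef, useState } from "react";

interface Syllable {
    text: string;
    start: number; // seconds into the clip
    end: number;
}

export function SyllablePlayer({ src, syllables }: { src: string; syllables: Syllable[] }) {
    const audioRef = useRef<HTMLAudioElement>(null);
    const [active, setActive] = useState(-1);

    useEffect(() => {
        const audio = audioRef.current;
        if (!audio) return;
        const onTime = () => {
            const t = audio.currentTime;
            setActive(syllables.findIndex((s) => t >= s.start && t < s.end));
        };
        audio.addEventListener("timeupdate", onTime);
        return () => audio.removeEventListener("timeupdate", onTime);
    }, [syllables]);

    // A slow playback mode can be as simple as lowering audio.playbackRate (e.g. 0.5)
    return (
        <>
            <audio ref={audioRef} src={src} controls />
            <p>
                {syllables.map((s, i) => (
                    <span key={i} className={i === active ? "text-primary font-bold" : ""}>
                        {s.text}
                    </span>
                ))}
            </p>
        </>
    );
}
```

Note that `timeupdate` only fires a few times per second, so a `requestAnimationFrame` loop would give smoother highlighting at syllable granularity.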
Future plans
We’re continuously improving the app—fixing bugs, adding features, and improving security as needed.
If you have feedback or suggestions, feel free to open an issue on GitHub.