During the time I've been writing Mino, I've been told a lot (especially in the early days, before I knew what I know now) that I should consider switching away from JavaScript due to its limitations and non-existent type system. Let me quote my favourite things that have been said to me:
- “JavaScript is not very performant, and it’s so unsafe!”
- “If you want real performance, you should use C# or Rust”
- “JavaScript is shit, you should use Python”
- “At least use TypeScript for type-safety”
The TypeScript debate was always a heated one, because I had a very clear standpoint on TypeScript that I hold to this day:
You don’t really need TypeScript if you know how to write good JavaScript.
The issue I personally had with TypeScript is that it added unnecessary boilerplate to my codebase that I didn't need. I didn't work with others and had a clear picture of what my code did. Also, using TypeScript with Node back in 2022 was just annoying (honestly, it still is), and I didn't see any reason to go through all of that pain just to add type safety. I was confident enough to say that I didn't need TypeScript, since I'd written good JavaScript code.
At some point in 2024 I started to use TypeScript more frequently in my projects, but the earliest thing I built with it was Tyro, at the end of 2022: an osu!api emulator, fully written in TypeScript, to experiment around with.
By the end of 2024 I had fully migrated a JavaScript project, Advance, to TypeScript. Its source code has not been updated at the time of writing this entry. The migration happened because using TypeScript became a lot easier with a runtime like Bun, which supports it out of the box.
Back in June of this year I also rewrote Mino in TypeScript, making a few improvements, though it was never released as a stable version. Using TypeScript more commonly in my projects gave me an idea: why not take it a step further and learn a new language?
Because in the end, some of them had a point. JavaScript does reach its limits when it comes to scaling, and Mino has been handling over 1.5 million requests each month.
How did we get here?
I've put my hands on a few programming languages before, including Python, C# and C++. I even tried Go and Rust for a little while and made the roughest sketch imitating the download route, and I have to say, I didn't really enjoy it that much. It felt overly complex and kind of hard to get into, so I dropped the idea and left it for a while before deciding to try again in July with the most promising-looking language: Go.
I decided to take another proper look at the language and did a lot of trial and error. A solid week later I had a first version, started to understand how things work in Go, experimented even more, and got down the basics like structs, error handling and modules.
Over the entire month of July I actually started to enjoy re-coding this project in Go, but of course I faced some challenges, because things obviously work differently than in JavaScript.
Note: For the rest of this post I'll just be using TypeScript in my examples.
The things I've noticed
Getting into Go was interesting because of the way everything is handled.
Web Requests
- There's no native `json()` helper on web responses like you have in JavaScript.
- You actually have to handle things like closing the response body to avoid leaking memory, which also extends to other resources like files.
- Making a simple web request usually involves a bit more boilerplate than your typical `fetch` call in JavaScript.
```ts
try {
  const response = await fetch(`https://catboy.best/api`)
  const data = await response.json() as any
} catch (err) {
  throw err
}
```

quickly became
```go
req, err := http.NewRequest("GET", "https://catboy.best/api", nil)
if err != nil {
	panic(err)
}

resp, err := (&http.Client{}).Do(req)
if err != nil {
	panic(err)
}
defer resp.Body.Close()

body, err := io.ReadAll(resp.Body)
if err != nil {
	panic(err)
}

var data map[string]interface{}
if err := json.Unmarshal(body, &data); err != nil {
	panic(err)
}
```

and the worst part is that parsing JSON in Go is another huge pain in the ass if you want to parse it into structs.
For those who are unaware, structs are built like the following:

```go
type User struct {
	ID       int
	Username string
}
```

To parse JSON into a struct, you have to define JSON tags on its fields:

```go
type User struct {
	ID       int    `json:"id"`
	Username string `json:"username"`
}
```

If you were now to unmarshal JSON into the struct, it would look like this:
```go
var user User
if err := json.Unmarshal(body, &user); err != nil {
	panic(err)
}
// Now you can access things like user.ID and user.Username
```

If the payload had any extra fields, you just wouldn't have them anywhere, unlike in JavaScript. (Which honestly can be fine for most cases.)
It does become a problem, however, if you expect access to all dynamic fields at all times, since you have to know in advance what payload you'll receive. With that in mind I built the basics, and things seemed fine™, so I didn't care too much.
Multithreading
God, I cannot describe how much I love goroutines. Just throw a `go` before a function call and it runs concurrently. How awesome is that? JavaScript could never be that smooth. Node.js did introduce worker threads a while ago, but eh, it's just not the same and harder to manage. Speaking of harder to manage!
Promises (async)
JavaScript really spoiled me with promises, although Go doesn't really need the same concept, since goroutines make concurrency so accessible: if you want to wait for all goroutines to finish, you just add a `sync.WaitGroup`, call `wg.Wait()`, and that's it.
```ts
async function networkTask(): Promise<number> {
  await Bun.sleep(1000)
  return 1
}

async function ioTask(): Promise<number> {
  await Bun.sleep(2000)
  return 1
}

// Instead of calling:
await networkTask() // takes 1 second
await ioTask()      // takes 2 seconds
// Total wait time: 3 seconds

// you instead use:
await Promise.all([networkTask(), ioTask()])
// Total wait time: 2 seconds, since both run in parallel
// and we only wait for the slowest one to finish.
```

In Go, the same principle looks like this:
```go
var wg sync.WaitGroup

func main() {
	// instead of:
	// networkTask()
	// ioTask()

	// use:
	wg.Add(2)
	go func() {
		defer wg.Done()
		networkTask()
	}()
	go func() {
		defer wg.Done()
		ioTask()
	}()
	wg.Wait()
}

func networkTask() int {
	time.Sleep(time.Second)
	return 1
}

func ioTask() int {
	time.Sleep(time.Second * 2)
	return 1
}
```

Works perfectly fine and isn't much different from JavaScript, while doing "true" multithreading.
Yes, I know there are also channels for this, but eh, I don't want to get into that here.
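Still, for the curious, here's a minimal sketch of what the channel version could look like, with the sleeps stripped out so the tasks return instantly:

```go
package main

import "fmt"

func networkTask() int { return 1 }
func ioTask() int      { return 1 }

func main() {
	// Instead of a WaitGroup, each goroutine sends its
	// result into a shared (buffered) channel.
	results := make(chan int, 2)
	go func() { results <- networkTask() }()
	go func() { results <- ioTask() }()

	// Each receive blocks until one of the goroutines is done,
	// so draining the channel doubles as the "wait".
	sum := 0
	for i := 0; i < 2; i++ {
		sum += <-results
	}
	fmt.Println(sum) // 2
}
```

Unlike the WaitGroup variant, this also carries the return values back, which is the closest analogue to `Promise.all` resolving with an array of results.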
I was so fascinated by how rate limiting works with multithreading that I even made a JavaScript module to make it easier there too.
The works
Anyways, I worked on this Go version a lot, even during Cavoe's osu! Event 2025 in August, with feedback from a few friends of mine. Shoutout to Marti (you absolute 👑) for letting me use his laptop to actively code on it, since I left mine at home.
I managed to cut loading times on routes like audio previews or no-video files, improving them by up to a whopping 50x.
To put that in perspective, serving the .osz of The Unforgiving without video took:
- about 5 seconds on Mino version 4,
- 2.5 seconds with multithreading on the unreleased Mino version 5,
- and only 150ms in Go.
You can see similar improvements for audio previews and raw audio serving, which makes it a huge success.
After a lot of testing around with the database, I've now decided to take a hybrid approach: filling Meilisearch with only the data that is relevant for searching, while storing the full data in SQLite. Yes, you've read that correctly: I'm using SQLite for a high-scale application. Mino mostly reads when it's not actively crawling from zero, which makes SQLite a perfect fit, since it's lightweight, doesn't need to be installed as an extra dependency, and is also blazing fast, with very low response times.
This also made me rethink my approach to my structs, because if you've ever looked at osu!'s api v2, it gets messy really quickly. Instead of mapping every field to its own column, I just store the JSON dynamically as its own data field. This way it stays up to date at all times, so I always stay compliant with the newest api v2 spec. It also means I no longer need to parse the entire JSON for each entry, which saves a little time.
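The core of that idea can be sketched with the standard library's `json.RawMessage` (the SQLite layer itself is omitted here, and the payload and struct name are made up):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Only the column needed for lookups is typed; the api v2 payload
// itself is carried around as opaque bytes and never fully parsed.
type StoredSet struct {
	ID   int
	Data json.RawMessage // raw JSON, written to the data column as-is
}

func main() {
	payload := []byte(`{"id":1,"title":"The Unforgiving","status":"ranked"}`)

	row := StoredSet{ID: 1, Data: payload}

	// Serving the entry back is just echoing the stored bytes;
	// no unmarshal/marshal round trip of the full document needed.
	fmt.Println(string(row.Data))
}
```

Because the bytes are stored verbatim, a change in the upstream spec changes the stored payload automatically, with no struct migration required.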
Now, if you've read my first post, you might ask: but Nanoo, what about Cheesegull?
Yes, I thought about that, but I don't even need most of the fields, so using Go's native unmarshal would be wasteful, because it walks through the entire JSON. Instead I use gjson and select only the fields I need (in my case about 10 fields out of 25 or so), saving a lot of memory and CPU time, since I hold on to fewer bytes.
So far, that has been my journey with Amigo, and I'll keep you updated.