Good questions @ijabz!
That’s true, and in that case users just do what they do now: orchestrate multiple queries over multiple round-trips to build the data they need – they don’t just give up and design their use cases around a single API request. So at the very least, the GraphQL endpoint saves you from doing that orchestration yourself: it makes requests in parallel, uses cached responses where possible, and handles the rate-limiting and retrying for you.
Unreasonably large queries could overwhelm the server (if we’re talking about a non-rate-limited implementation), but that’s a problem people have solved before. For example, there are many public SPARQL query servers (SPARQL being another, more well-established query language where you can construct arbitrarily complex queries across large graphs of data) – Wikidata’s and DBpedia’s SPARQL endpoints come to mind – and they simply cap how long a query can run (their backend can presumably cancel pending queries server-side once the limit is reached). That’s one potential solution.
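To make the timeout idea concrete, here’s a minimal Python sketch of deadline-based query execution. Everything here is hypothetical (the names, and checking between resolution steps rather than truly cancelling in-flight work) – it’s just the shape of the budget-enforcement idea, not how Wikidata or any real server implements it:

```python
import time

class QueryTimeout(Exception):
    """Raised when a query exceeds its time budget."""

def execute_with_deadline(resolve_steps, max_seconds=30.0):
    """Run each resolution step, aborting once the deadline passes.

    `resolve_steps` is any iterable of zero-argument callables. A real
    server would cancel the in-flight query on the backend instead of
    checking between steps, but the enforcement idea is the same.
    """
    deadline = time.monotonic() + max_seconds
    results = []
    for step in resolve_steps:
        if time.monotonic() > deadline:
            raise QueryTimeout(f"query exceeded {max_seconds}s limit")
        results.append(step())
    return results
```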
Another solution might be to let the GraphQL server make non-rate-limited REST API calls to fulfill queries, but cap the number of REST API requests it can make while fulfilling a single query. Then, sure, we wouldn’t be allowing unlimited complexity, but the limit would be reasonable and would prevent abuse.
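A per-query request cap could be as simple as a counter wrapped around the REST client. This is a hypothetical sketch in Python (the names and the cap value are made up, not part of my actual implementation):

```python
class RequestBudgetExceeded(Exception):
    """Raised once a query has used up its REST request allowance."""

class BudgetedClient:
    """Wraps a REST-call function, capping calls per GraphQL query."""

    def __init__(self, fetch, max_requests=25):
        self._fetch = fetch          # e.g. a function that GETs a URL
        self._remaining = max_requests

    def get(self, url):
        if self._remaining <= 0:
            raise RequestBudgetExceeded("per-query REST request cap reached")
        self._remaining -= 1
        return self._fetch(url)
```

The server would create one `BudgetedClient` per incoming GraphQL query, so a pathological query fails fast instead of fanning out into hundreds of backend requests.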
Anyway, we have to assume that most requests are of a reasonable nature – otherwise MusicBrainz would be overloaded even worse under its current implementation, because people would still be making those same requests, just less efficiently.
Having a GraphQL implementation deployed at the source is kind of my dream goal, and an interesting idea the MetaBrainz team should consider! In general, yes, implementing this as a translation layer to direct database queries (instead of going through the current REST API) would indeed be the ideal case, and it would open the door to more query optimizations.
Sort of – my particular implementation tries to make the most minimal REST API requests possible to satisfy the user’s query – the same requests the user would need to make to orchestrate that data fetching themselves. In other words, it’s not just requesting kitchen-sink-rels, etc. – it inspects the requested fields and makes a minimal set of requests.
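Roughly, the idea is a mapping from requested GraphQL fields to MusicBrainz `inc` parameters, then taking only the `inc` values the query actually needs. This Python sketch is illustrative only – the field names and mappings here are hypothetical, and the real implementation is structured differently:

```python
# Hypothetical mapping from GraphQL field names to MusicBrainz `inc`
# values; the actual field names and inc values may differ.
FIELD_TO_INC = {
    "artists": "artists",
    "releases": "releases",
    "tags": "tags",
    "artistRelationships": "artist-rels",
    "urlRelationships": "url-rels",
}

def build_inc_params(requested_fields):
    """Return the minimal, de-duplicated `inc` list for a query's fields."""
    incs = {FIELD_TO_INC[f] for f in requested_fields if f in FIELD_TO_INC}
    return sorted(incs)
```

So a query asking only for tags and URL relationships translates to a single lookup with just those two `inc` values, rather than every possible subquery.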
As long as that’s the case, it’s not actually a problem that the backend retrieves more fields than are requested: adding more fields to the response barely increases latency, if at all. The big factor is response payload size, which matters far more on slow/mobile connections than the extra time it takes the database to return an extra field. So you’re still getting quite a big win from the minimal response payloads GraphQL allows, even if secretly on the backend some data is “thrown away”.
Another advantage is that the GraphQL server can control the cache policy. Currently, my implementation caches the full response from every REST API call for 24 hours – that’s why many of the example queries on the GitHub page load instantly. So when I said “thrown away” before, what’s actually happening is that those fields are cached for potential reuse, just not returned in the query response. If someone makes a slightly different query (requesting different fields) that happens to translate to similar underlying REST API calls, there’s a good chance it won’t need to hit the real REST API at all.
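The caching layer amounts to a TTL cache keyed by the REST URL. Here’s a minimal Python sketch of that idea (hypothetical names, and a dict instead of whatever store the real implementation uses):

```python
import time

class TTLCache:
    """Cache full REST responses for a fixed time-to-live (e.g. 24h)."""

    def __init__(self, ttl_seconds=24 * 60 * 60):
        self._ttl = ttl_seconds
        self._entries = {}  # url -> (expires_at, response)

    def get_or_fetch(self, url, fetch):
        now = time.monotonic()
        entry = self._entries.get(url)
        if entry and entry[0] > now:
            return entry[1]              # still fresh: serve from cache
        response = fetch(url)            # miss or expired: hit the REST API
        self._entries[url] = (now + self._ttl, response)
        return response
```

Because the cache key is the underlying REST URL (not the GraphQL query text), two different GraphQL queries that resolve to the same REST call share the same cached response.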
In summary: yes, there’s absolutely a ton more optimization possible to make this more feasible for public consumption, but there are still advantages even with the current implementation. In general, I think the approach to APIs should not be “protect our server from doing too much work by forcing users to contort themselves & make many piecemeal queries” but rather “allow whatever queries we can to satisfy our users’ needs, and figure out a way to mitigate the unreasonable ones.”