Contact Information
- Nickname: failure
- IRC nick/Matrix handle: failure / @failure:4d2.org
- Email: shirsakm@protonmail.com
- GitHub: shirsakm (failure)
- Project Size: Large (350 hours)
I love FOSS, from the community aspect of it all to the awesome tools and apps that I get to use every day–so much that I wanted to contribute myself. So yes, I have made contributions to FOSS projects before, from cool stuff like Infinite Chess to WintrChess. I have also made Nightlio–my own FOSS alternative to an app called Daylio–mostly because I needed it. I later registered it for a GitHub hack and was one of the winners! While it did get some good community support, my willpower to maintain it fizzled out a while back.
To me, community is open-source. There are tales of great projects dying due to toxic communities, and of small projects that live for decades with a blossoming, welcoming community. If you have skipped the interactions, well, you have missed half of what open-source is even about!
Proposed Project
While I had a different idea initially, I realized it would take more time than I had to dedicate (to craft a feasible proposal itself!), so I picked one of the proposals off the shelf.
As a prelude, it is my belief–and you will stumble upon this multiple times as you read this proposal–that the best way to design new infrastructure is to work off of existing, functional infrastructure and make only the changes required. You will see this in my schema choices, in my frontend mockups, and pretty much everywhere I discuss change. So, I believe we must first take a look at the existing infrastructure on MusicBrainz and how it stores events.
MusicBrainz uses a highly normalized relational model (thank you, Comp Sci degree for the technical jargon). A single event is distributed across multiple strict tables:
- musicbrainz.event: Core event data, like dates, times, and so on.
- musicbrainz.artist: Artist metadata.
- musicbrainz.l_artist_event: Intersection table between artists and events.
- musicbrainz.link & musicbrainz.link_type: Define the artist's relationship to the event, e.g. whether they were the main performer or a support act.
This is fine for MB, but for our purposes in LB, this level of normalization is unnecessary.
1. Database
So, based on this preliminary exploration, I propose the creation of three new tables to make a relatively denormalized cache.
- mapping.mb_event_cache: Ingests most of the core event data, plus extra flexible data into a JSONB column. Stored in the Timescale database.
- mapping.mb_event_artist_cache: Intersection table between artist data and the event records. More on this later. Stored in the Timescale database.
- public.user_artist_relationship: Tracks user-artist follows, similar to user-user follows. Stored in the Postgres database.
1.a. User-Artist Relationship Table
This table tracks which artists a user follows and will be stored in the Postgres database. It borrows from ListenBrainz’s existing public.user_relationship table which we use to track user-to-user follows. I don’t think I have much to explain here. Here are the sample SQL statements that I think make sense, taken mostly from existing code,
-- admin/sql/create_types.sql
CREATE TYPE user_artist_relationship_enum AS ENUM('follow');
-- admin/sql/create_tables.sql
CREATE TABLE user_artist_relationship (
-- user -> artist not artist -> user
user_id INTEGER NOT NULL,
artist_mbid UUID NOT NULL,
relationship_type user_artist_relationship_enum NOT NULL,
created TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
-- admin/sql/create_primary_keys.sql
ALTER TABLE user_artist_relationship
ADD CONSTRAINT user_artist_relationship_pkey
PRIMARY KEY (user_id, artist_mbid, relationship_type);
-- admin/sql/create_foreign_keys.sql
ALTER TABLE user_artist_relationship
ADD CONSTRAINT user_artist_relationship_user_id_foreign_key
FOREIGN KEY (user_id) REFERENCES "user" (id) ON DELETE CASCADE;
-- admin/sql/create_indexes.sql
CREATE INDEX user_id_user_artist_relationship_ndx ON user_artist_relationship (user_id);
CREATE INDEX artist_mbid_user_artist_relationship_ndx ON user_artist_relationship (artist_mbid);
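For completeness, here is a rough sketch of the DB helpers that would sit on top of this table. The function names match the ones I use in Section 3 and Week 4; the ON CONFLICT clause is my assumption that following an already-followed artist should be a no-op.

# db/user_artist_relationship.py (sketch; final signatures may differ)
def insert_artist_follow(db_conn, user_id: int, artist_mbid: str) -> None:
    query = """
        INSERT INTO user_artist_relationship (user_id, artist_mbid, relationship_type)
        VALUES (%s, %s, 'follow')
        ON CONFLICT DO NOTHING
    """
    db_conn.execute(query, (user_id, artist_mbid))


def delete_artist_follow(db_conn, user_id: int, artist_mbid: str) -> None:
    query = """
        DELETE FROM user_artist_relationship
        WHERE user_id = %s
          AND artist_mbid = %s
          AND relationship_type = 'follow'
    """
    db_conn.execute(query, (user_id, artist_mbid))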
1.b. Event Metadata Cache
This table holds the core event data, denormalized from the various MB event tables. It will be stored in the Timescale database. To maintain schema flexibility for posterity's sake, dynamic attributes (like relationships, tags, and aliases) will be stored in the event_data JSONB column.
-- admin/timescale/create_tables.sql
CREATE TABLE mapping.mb_event_cache (
dirty BOOLEAN DEFAULT FALSE, -- row needs re-ingestion (mirroring the existing metadata caches)
last_updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
event_mbid UUID NOT NULL,
event_id INTEGER NOT NULL,
event_name TEXT NOT NULL,
begin_date_year SMALLINT,
begin_date_month SMALLINT,
begin_date_day SMALLINT,
end_date_year SMALLINT,
end_date_month SMALLINT,
end_date_day SMALLINT,
event_time TIMESTAMPTZ,
cancelled BOOLEAN NOT NULL DEFAULT FALSE,
ended BOOLEAN NOT NULL DEFAULT FALSE,
event_type_gid UUID,
event_art_presence TEXT NOT NULL DEFAULT 'absent',
place_id UUID,
place_name TEXT,
area_id UUID[],
event_data JSONB NOT NULL DEFAULT '{}'::jsonb
);
-- admin/timescale/create_primary_keys.sql
ALTER TABLE mapping.mb_event_cache
ADD CONSTRAINT mb_event_cache_pkey PRIMARY KEY (event_mbid);
-- admin/timescale/create_indexes.sql
CREATE UNIQUE INDEX mb_event_cache_idx_event_id ON mapping.mb_event_cache (event_id);
CREATE INDEX mb_event_cache_idx_date ON mapping.mb_event_cache (begin_date_year, begin_date_month, begin_date_day);
CREATE INDEX mb_event_cache_idx_dirty ON mapping.mb_event_cache (dirty);
1.c. Event-Artist Edge Cache
This is the intersection table that links events to artists. It is crucial for finding the events a specific artist is associated with in a reasonable amount of time; by separating this from the main event cache, we allow the backend to quickly execute SELECT queries across many artists simultaneously.
-- admin/timescale/create_tables.sql
CREATE TABLE mapping.mb_event_artist_cache (
event_mbid UUID NOT NULL,
event_id INTEGER NOT NULL,
artist_mbid UUID NOT NULL,
artist_id INTEGER NOT NULL,
link_id INTEGER NOT NULL,
link_type_gid UUID NOT NULL,
link_type_name TEXT NOT NULL,
relationship_data JSONB NOT NULL DEFAULT '{}'::jsonb,
last_updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- admin/timescale/create_primary_keys.sql
ALTER TABLE mapping.mb_event_artist_cache
ADD CONSTRAINT mb_event_artist_cache_pkey
PRIMARY KEY (event_id, artist_id, link_id);
-- admin/timescale/create_indexes.sql
CREATE INDEX mb_event_artist_cache_idx_artist_id_event_id ON mapping.mb_event_artist_cache (artist_id, event_id);
CREATE INDEX mb_event_artist_cache_idx_artist_mbid ON mapping.mb_event_artist_cache (artist_mbid);
CREATE INDEX mb_event_artist_cache_idx_event_id ON mapping.mb_event_artist_cache (event_id);
CREATE INDEX mb_event_artist_cache_idx_link_type_gid ON mapping.mb_event_artist_cache (link_type_gid);
1.d. Performer Filtering
While going through the possible values of link_type_gid in l_artist_event, I noticed it also includes hosts, conductors, and the like. It is worth considering whether we want to use such values, i.e. would a user want a notification if the host is someone they follow, or only if the main performer is an artist they follow? While this could grow into a whole settings panel in the future, that is probably out-of-scope for this proposal. But we might want to explicitly include only the GIDs we want; it depends on how they are used, and how common they really are.
The ingestion cron job (integrated into mb_cache_base.py) could explicitly filter edges using specific link_type_gid values to ensure we only cache actual performers; a sketch follows the list below.
- Main Performer: 936c7c95-3156-3889-a062-8a0cd57f8946
- Support Act: 492a850e-97eb-306a-a85e-4b6d98527796
- Guest Performer: 292df906-98a6-307e-86e8-df01a579a321
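To illustrate, here is a minimal sketch of that filter. The module path is a guess, and the query assumes the standard l_artist_event layout, where entity0 is the artist and entity1 is the event.

# mbid_mapping/mapping/mb_event_artist_cache.py (hypothetical path)
# Allow-list of artist-event link types we actually want to cache.
PERFORMER_LINK_TYPE_GIDS = (
    "936c7c95-3156-3889-a062-8a0cd57f8946",  # main performer
    "492a850e-97eb-306a-a85e-4b6d98527796",  # support act
    "292df906-98a6-307e-86e8-df01a579a321",  # guest performer
)

EDGE_QUERY = """
    SELECT e.gid  AS event_mbid,
           e.id   AS event_id,
           a.gid  AS artist_mbid,
           a.id   AS artist_id,
           l.id   AS link_id,
           lt.gid AS link_type_gid,
           lt.name AS link_type_name
      FROM musicbrainz.l_artist_event lae
      JOIN musicbrainz.link l       ON l.id = lae.link
      JOIN musicbrainz.link_type lt ON lt.id = l.link_type
      JOIN musicbrainz.artist a     ON a.id = lae.entity0
      JOIN musicbrainz.event e      ON e.id = lae.entity1
     WHERE lt.gid IN %s
"""

# usage: curs.execute(EDGE_QUERY, (PERFORMER_LINK_TYPE_GIDS,))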
1.e. Event Interaction Table
This is mainly to allow users to subscribe to notifications for a single event without following the artist. It could also be nice to display how many people are “watching” an event. I have grappled with whether this should be called “watching” or “attending”, though the latter might be a bit of a misnomer: I might be interested but unable to buy tickets, for example. I am not sure which is more relevant.
I am very open to suggestions for this. Also, this is not my own idea–I saw it in other proposals, and liked the idea, so I picked it up. Though, I do think “Interested / Going” isn’t necessarily the right way to approach this.
Either way, here’s the schema and more–it will be stored in the Postgres database as well, under the public schema.
-- admin/sql/create_types.sql
CREATE TYPE event_interaction_enum AS ENUM('watch');
-- admin/sql/create_tables.sql
CREATE TABLE event_interaction (
user_id INTEGER NOT NULL, -- FK to "user".id
event_mbid UUID NOT NULL,
interaction_type event_interaction_enum NOT NULL,
created TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
-- admin/sql/create_primary_keys.sql
ALTER TABLE event_interaction
ADD CONSTRAINT event_interaction_pkey
PRIMARY KEY (user_id, event_mbid, interaction_type);
-- admin/sql/create_foreign_keys.sql
ALTER TABLE event_interaction
ADD CONSTRAINT event_interaction_user_id_foreign_key
FOREIGN KEY (user_id) REFERENCES "user" (id) ON DELETE CASCADE;
-- admin/sql/create_indexes.sql
CREATE INDEX user_id_event_interaction_ndx ON event_interaction (user_id);
CREATE INDEX event_mbid_event_interaction_ndx ON event_interaction (event_mbid);
Some sample code on relevant functions for this,
# not sure where these would go yet, probably API endpoints
def watch_event(db_conn, user_id: int, event_mbid: str) -> None:
    query = """
        INSERT INTO event_interaction (user_id, event_mbid, interaction_type)
        VALUES (%s, %s, 'watch')
    """
    db_conn.execute(query, (user_id, event_mbid))


def unwatch_event(db_conn, user_id: int, event_mbid: str) -> None:
    query = """
        DELETE FROM event_interaction
        WHERE user_id = %s
          AND event_mbid = %s
          AND interaction_type = 'watch'
    """
    db_conn.execute(query, (user_id, event_mbid))


def get_watched_events(db_conn, user_id: int, limit: int = 50, offset: int = 0):
    query = """
        SELECT event_mbid::TEXT
        FROM event_interaction
        WHERE user_id = %s AND interaction_type = 'watch'
        ORDER BY created DESC
        LIMIT %s OFFSET %s
    """
    return db_conn.execute(query, (user_id, limit, offset)).fetchall()
1.f. A Tale of Two Databases
Because LB has both a Postgres and a Timescale database, we must first query the Postgres database for a user’s followed artists, and then query Timescale using this data. This isn’t ideal for this specific use-case (a SQL JOIN would be much more efficient), but it is the standard architecture in the codebase, and I think it makes sense to keep it that way, assuming we will use followed artists for other features. It might just be a bad design decision that we are reinforcing, but I am not sure. Here is a sample of how the code might look,
# db/user_artist_relationship.py
def get_followed_artist_mbids(db_conn, user_id):
    query = """
        SELECT artist_mbid::TEXT
        FROM public.user_artist_relationship
        WHERE user_id = %s AND relationship_type = 'follow'
    """
    return db_conn.execute(query, (user_id,)).fetchall()


# db/event_feed.py
def get_upcoming_events_for_artists(ts_conn, artist_mbids, start_date, end_date):
    if not artist_mbids:
        return []
    query = """
        SELECT ec.*
        FROM mapping.mb_event_artist_cache eac
        JOIN mapping.mb_event_cache ec
          ON ec.event_id = eac.event_id
        WHERE eac.artist_mbid = ANY(%s)
          AND ec.cancelled = false
          AND make_event_date(ec.begin_date_year, ec.begin_date_month, ec.begin_date_day)
              BETWEEN %s AND %s
        ORDER BY ec.begin_date_year, ec.begin_date_month, ec.begin_date_day, ec.event_time
    """
    return ts_conn.execute(query, (artist_mbids, start_date, end_date)).fetchall()
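To make the two round-trips concrete, the glue code would look something like this (a sketch; get_personal_event_feed is a name I made up):

# listenbrainz/db/event_feed.py (sketch)
def get_personal_event_feed(db_conn, ts_conn, user_id, start_date, end_date):
    # Round-trip 1: Postgres, for the user's followed artists.
    rows = get_followed_artist_mbids(db_conn, user_id)
    artist_mbids = [row[0] for row in rows]
    # Round-trip 2: Timescale, for those artists' upcoming events.
    return get_upcoming_events_for_artists(ts_conn, artist_mbids, start_date, end_date)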
As a side note, I did glance at the other proposals, and no one seems to mention this? Maybe I missed it at first glance, or maybe I am over complicating things, I am truly not certain.
2. Frontend
As a disclaimer, I am not very good at creating mockups. Thankfully, this proposal isn’t quite as mockup heavy as the rest (I think, at least). Most of these mockups were made by editing the elements in place, and will be similar to already existing pages. This doesn’t have to be the case in the final implementation, but for mockups, I think it’s fine.
The mockups are also a bit lacking, admittedly, because I lost my final set of mockups and have had to re-create them in a hurry. I will try to make them better if I find extra time after smoothing out all else in the proposal. Based on various comments by Aerozol, mockups don't seem to be the make-or-break of a proposal, and well, that seems fair to me.
2.a. Events Explorer page
As mentioned in the idea itself, I think an Events Explorer page is warranted to help users discover events near them. As much as I hate putting it there, it would probably be best made a card on the Explore page, similar to other fun and quirky LB features. Though, it might be worth considering whether it should be somewhere more visible so people use it more.
The current mockup is a blatant rip-off of the Fresh Releases page, though admittedly, it does seem adequately suited for the task.

2.b. Events page
I think an Events page inspired by the Album or Artist page should suffice. I had a mockup for this, but now it's gone and I am going to remake it; till then, please imagine fancy-shmancy event art, links to tickets and the event's homepage, and just general information! Yeah, this sucks.
2.c. Artist page
The existing Artist pages can be modified to include a follow button; I have chosen to add a small icon in the cluster of icons at the top right of the screen. A dedicated Follow button might also be considered, similar to the pre-existing Radio button, but I personally prefer the minimal design–though please, let me know.

The Artist pages also need to display upcoming and / or past events for the artist, and I think this is probably best suited under EPs. I had initially placed it under the Singles collection as well, but that prevents the user from seeing it at first glance, and they might not even realize it's there! It could also just be its own thing, instead of being shoved into the Discography section.

2.d. Events search
Events need to be displayed in the existing Search functionality.

2.e. Explore page
The Explore page needs a new card to lead to the Events Explorer page. Not a big thing, but worth noting, I guess.

2.f. Feed notifications
The notifications shared on the user’s feed will look much like a recommended track–at least, I think they should. I will add a mockup for this bad boy as soon as possible, as well.
3. API Endpoints
I also need to implement the API endpoints to make sure the frontend can actually access this information, of course. These endpoints will be decorated with @crossdomain and @ratelimit(), and will call validate_auth_header()–the usual, really. Here is a non-exhaustive list,
| Blueprint | Route | Method | What it does |
|---|---|---|---|
| social_api_bp | /1/user/<user_name>/follow-artist | POST | Follow. |
| social_api_bp | /1/user/<user_name>/follow-artist/<artist_mbid> | DELETE | Unfollow. |
| event_api_bp | /1/user/<user_name>/events/followed-artists | GET | Fetches personalized events feed. |
| explore_api_bp | /1/explore/events | GET | Fetches upcoming events. Essentially a global events feed. |
| metadata_api_bp | /1/metadata/event/<event_mbid> | GET | Fetches single event details. |
Sample code for the artist follow feature,
# listenbrainz/webserver/views/social_api.py
# (imports modeled on the existing social endpoints; exact paths to be confirmed)
import orjson
from flask import jsonify, request

from brainzutils.ratelimit import ratelimit
from listenbrainz.db.user_artist_relationship import insert_artist_follow  # new helper, Week 4
from listenbrainz.webserver import db_conn
from listenbrainz.webserver.decorators import crossdomain
from listenbrainz.webserver.errors import APIForbidden
from listenbrainz.webserver.views.api_tools import validate_auth_header


@social_api_bp.post("/user/<mb_username:user_name>/follow-artist")
@crossdomain
@ratelimit()
def follow_artist(user_name):
    current_user = validate_auth_header()
    if user_name != current_user["musicbrainz_id"]:
        raise APIForbidden("Cannot modify another user's followed artists.")
    req_json = orjson.loads(request.data)
    artist_mbid = req_json.get("artist_mbid")
    insert_artist_follow(db_conn, current_user["id"], artist_mbid)
    return jsonify({"status": "ok"})
A lot of the Python code for this is sprinkled throughout the proposal, next to the architecture it depends on, hence this section is a little barren.
4. The Search Begins
The changes to the search functionality are rather trivial, as the MB API handles most of it for us! Similar to artist or album lookups, we can just define a utility function in APIService.ts, and define a component to parse and handle this data.
We can just work off of the pre-existing AlbumSearch.tsx, and modify it to fit events. The event art can be fetched from Event Art Archive, similar to how it is already being done with the CAA. If it’s not present, we can use a generic image. Here’s an example.
// frontend/js/src/utils/APIService.ts
eventLookup = async (
  searchQuery: string,
  offset: number = 0,
  count: number = 25
): Promise<EventTypeSearchResult> => {
  const url = `${this.MBBaseURI}/event?query=${encodeURIComponent(
    searchQuery
  )}&fmt=json&offset=${offset}&limit=${count}`;
  const response = await fetch(url);
  await this.checkStatus(response);
  return response.json();
};
5. Notifications
Notifications–in the context of my proposal–mainly means sending notifications via the user’s timeline feed.
In my opinion, e-mail notifications, or anything more than in-feed notifications, are entirely out-of-scope. Not because they are technically infeasible, but because this isn’t really how LB seems to do notifications. I say this not as a dev, but as a user: in my months using LB, I have never received any mails–not even the YIM one! Now, if there is a push to send more mails–say, about new releases from followed artists, weekly playlist suggestions, the like–it would probably be best to handle that separately, as its own little thing.
5.a. Song Recommendations
Bear with me here, and take this detour, so we can talk about how the existing system works. When a user recommends a song, only the MBID is stored in the public.user_timeline_event table as JSONB. When the user loads their feed, the cached data for these is retrieved via fetch_track_metadata_for_items(...). This is a neat piece of design (I was quite amazed at how well it works), and it is pretty much exactly what I will adopt for event notifications.
5.b. Event Notifications
To adapt this pattern, we need the following, give or take:
-- new admin/sql/updates/ update script
ALTER TYPE user_timeline_event_type_enum
ADD VALUE 'event_notification' AFTER 'thanks';
ALTER TYPE hide_user_timeline_event_type_enum
ADD VALUE 'event_notification';
Here’s some sample code for the Python side of things,
# listenbrainz/db/model/user_timeline_event.py
from typing import Optional

from pydantic import BaseModel, constr


class EventNotificationMetadata(BaseModel):
    """What gets stored in the timeline event's JSONB blob."""
    event_mbid: constr(min_length=1)
    artist_mbids: list[str]


class APIEventNotificationEvent(BaseModel):
    """What gets sent to the frontend after enrichment."""
    event_mbid: str
    event_name: str
    begin_date_year: Optional[int]
    begin_date_month: Optional[int]
    begin_date_day: Optional[int]
    artist_names: list[str]  # resolved names, not MBIDs
    event_art_url: Optional[str]
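The matching Python-side enum member needs adding as well; something like this, assuming UserTimelineEventType is the enum the existing feed code switches on (existing members elided):

# listenbrainz/db/model/user_timeline_event.py
class UserTimelineEventType(Enum):
    # ... existing members (recording recommendations, thanks, etc.) ...
    EVENT_NOTIFICATION = "event_notification"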
These are ripped off straight from create_user_track_recommendation_event(...) and get_recording_recommendation_events_for_feed(...),
# listenbrainz/db/user_timeline_event.py
def create_event_notification(db_conn, user_id, metadata):
    return create_user_timeline_event(
        db_conn, user_id,
        event_type=UserTimelineEventType.EVENT_NOTIFICATION,
        metadata=metadata,
    )


def get_event_notifications_for_feed(db_conn, user_ids, min_ts, max_ts, count):
    return get_user_timeline_events(
        db_conn, user_ids,
        event_type=UserTimelineEventType.EVENT_NOTIFICATION,
        min_ts=min_ts, max_ts=max_ts, count=count,
    )
5.c. Enrichment and Scheduling
This section is the lesser-known cousin of Crime and Punishment.
There needs to be a function that mirrors fetch_track_metadata_for_items(...) but for events. When the feed is loaded, we take the stored list of event_mbid and look them up in the Timescale event cache to get the actual metadata.
# listenbrainz/webserver/views/user_timeline_event_api.py
def get_event_notification_events(user, min_ts, max_ts, count):
    ...
We would then need to call this in get_feed_events_for_user(), similar to the rest, as far as I can tell.
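For illustration, a minimal sketch of that enrichment helper (the name and return shape are mine, not final):

# listenbrainz/db/event_feed.py (sketch)
def fetch_event_metadata_for_items(ts_conn, event_mbids):
    # Look up cached metadata for the event MBIDs stored in timeline events.
    if not event_mbids:
        return {}
    query = """
        SELECT event_mbid::TEXT, event_name,
               begin_date_year, begin_date_month, begin_date_day
        FROM mapping.mb_event_cache
        WHERE event_mbid = ANY(%s::uuid[])
    """
    rows = ts_conn.execute(query, (event_mbids,)).fetchall()
    # Map MBID -> metadata row so the feed code can enrich each item in O(1).
    return {row[0]: row for row in rows}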
Now, we need to set up a cron job to actually create these events, something similar to daily_jams.py,
# listenbrainz/cron/event_notifications.py
def run_event_notifications(db_conn, ts_conn):
    last_run = get_last_run_timestamp("event_notifications_last_run")
    # Only events ingested since the last run (last_updated > last_run),
    # to prevent duplicate notifications.
    new_events = get_events_ingested_since(ts_conn, last_run)
    for event in new_events:
        followers = get_users_following_artists(db_conn, event["artist_mbids"])
        watchers = get_users_watching_event(db_conn, event["event_mbid"])
        all_users = deduplicate(followers + watchers)
        for user in all_users:
            create_event_notification(db_conn, user["id"], EventNotificationMetadata(
                event_mbid=event["event_mbid"],
                artist_mbids=event["artist_mbids"],
            ))
    save_last_run_timestamp("event_notifications_last_run")
6. User Area
Since LB-1549 is not yet implemented, we can, at least temporarily, allow the user to input their location exactly how it is done in MB, and store it separately in LB. This can later be replaced by MeB lookups once those are implemented.
Till then, we can just add a column to store the location MBID on the public.user_setting table,
-- new admin/sql/updates/ command
ALTER TABLE user_setting ADD COLUMN location_mbid UUID;
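A small helper for writing it could look like this (a sketch; the upsert assumes user_setting has a unique constraint on user_id, which needs verifying):

# db/user_setting.py (sketch)
def set_user_location(db_conn, user_id: int, location_mbid: str) -> None:
    query = """
        INSERT INTO user_setting (user_id, location_mbid)
        VALUES (%s, %s)
        ON CONFLICT (user_id)
        DO UPDATE SET location_mbid = EXCLUDED.location_mbid
    """
    db_conn.execute(query, (user_id, location_mbid))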
Now we must tackle how we actually figure out if an event’s area matches the user’s. This turned out to be more of a challenge than I initially imagined, and while I am still unhappy with the solution, this is all I got.
While creating the cache, we must check the event’s area, and find all of it’s parent area MBIDs, and store those in the entry as well. This allows us to check if the user’s location MBID is present in that array, and if it is, we can show it to the user.
^ I don’t like the above solution, and I can’t think of a better one, so it’s getting the axe.
We can use the MB API again to outsource our work and find events near an area. This is particularly useful for the Events Explorer page. Here is an example query for LA, to illustrate my point.
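Roughly, the request would look like this (the area MBID below is a placeholder, not the real Los Angeles one):

# Hypothetical illustration of browsing events by area via the MB WS/2 API.
import requests

AREA_MBID = "<los-angeles-area-mbid>"  # placeholder; look the real MBID up first
response = requests.get(
    "https://musicbrainz.org/ws/2/event",
    params={"area": AREA_MBID, "fmt": "json", "limit": 25},
    headers={"User-Agent": "ListenBrainz-Events-Prototype/0.1"},
)
response.raise_for_status()
events = response.json()["events"]  # events linked to that area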
This mockup is stolen straight from the LB mockups, not made by me, but it looks great.
And kudos to MrKomodoDragon for bringing this to my attention.
7. MBID Mapping
I want to expand on this section, but for now, I will provide a small list of the files that need to be changed, and what they would need to be based on. I will apologize preemptively for all the hand waving.
Firstly, an mb_event_metadata_cache.py file would be warranted, worked off of the pre-existing mb_artist_metadata_cache.py file.
We would need to implement something like,
class MusicBrainzEventMetadataCache(MusicBrainzEntityMetadataCache):
...
We also need to update the cron jobs in manage_cron.py, and the manage.py file for CLI access, though that probably comes a bit later.
8. Testing and Documentation
In my opinion, documentation is the most important part of the project. I know how difficult it is to return to something poorly documented, if documented at all, and try to make changes. (You didn't think I stopped maintaining Nightlio for no reason, now did you? Mistakes were made.) Even if I only get half of what I have promised done, having documentation will at least allow someone else to pick up the work in the future. So, I think a weekly or fortnightly schedule for writing documentation would be best. It allows for essentially rest days, where I can chill, write about what I have done over the week, and document the technical aspects; I have had good experiences with this before.
Timeline
I have roughly 25–30 hours a week to work with–2 hours on weekdays, and two 8-hour weekend shifts. Except during exams (June 10–24), when I'll be down to about 10 hours a week, tops. Documentation gets its own dedicated time every couple of weeks, as I mentioned in Section 8–think of them as “rest days” where I write about what I just built rather than building more.
Community Bonding Period
A normal person would probably say “Get to know the mentors” and “Talk to the community”, but I have already done most of that, so I will focus on the following,
- Iron out all the details, every single little wrinkle left
- Set up the databases, and double and triple check them
- Refine the UI elements, and mess around if I feel like it
- Mostly continue being me: interacting in the community, opening PRs, wrapping up the old ones, and trying to get a grip on how it feels to spend 25–30 hours a week
Coding Period
Week 1 — Timescale Schema
- Set up mapping.mb_event_cache and mapping.mb_event_artist_cache on Timescale
- Write and run the Timescale migration scripts
- Add tests
- Deliverable: New Timescale cache table built, but not used
Week 2 — Cache Builder
- This is the big one. Scaffold mb_event_metadata_cache.py off of MusicBrainzEntityMetadataCache, and implement the actual ingestion logic: querying MB, filtering performers by link_type_gid (main performer, support act, guest performer, as listed in Section 1.d), handling dirty flags, the whole nine yards
- Add it into manage_cron.py and manage.py for CLI access
- Write db/event_feed.py: get_upcoming_events_for_artists, get_events_by_area
- I won't sugarcoat it, this is going to be a dense week. But I'd rather front-load the pain here than drag it out, because exams
- Deliverable: Working event cache builder that ingests events and artist-event edges from MB into Timescale; complete DB access layer
Week 3 — Postgres Schema
- Set up the user_artist_relationship table. This is basically a copy of the existing user_relationship table with a UUID instead of a second user ID, so it shouldn't be too bad
- Same thing for event_interaction
- Write the migration scripts across the relevant
admin/sql/files - ~10 hrs
- Deliverable: New Postgres tables, ready for use
Week 4 — DB Helpers
- Write db/user_artist_relationship.py: insert_artist_follow, delete_artist_follow, get_followed_artist_mbids
- Write db/event_interaction.py: watch_event, unwatch_event, get_watched_events
- Basic tests for each
- Small, self-contained functions that follow existing patterns, nothing wild
- ~10 hrs
- Deliverable: DB access helpers for artist follows and event interactions, with tests
Week 5 — API Endpoints
- Implement the five endpoints from Section 3: artist follow/unfollow on social_api_bp, the personalized events feed on event_api_bp, the global events feed on explore_api_bp, and single event details on metadata_api_bp–all with the standard @crossdomain and @ratelimit() decorators plus validate_auth_header()
- Write API integration tests for each
- Deliverable: All five API endpoints implemented and tested
Week 6 — Search & First Docs
- Implement eventLookup in APIService.ts, hitting the MB WS/2 API
- Build EventSearch.tsx, working off of AlbumSearch.tsx
- Hook up Event Art Archive images with a fallback for when there's no art
- Add events to the global search UI
- First documentation pass to cover the schema, DB helpers, API contracts, and search integration
- Deliverable: Events show up in global search; documentation batch #1 done; ready for midterm
Midterm Evaluation (July 6 – July 10)
By now: all the tables exist, the cache builder works, all APIs are up, and search is functional. That’s a working backend that can be demo’d on its own, which I think is a good place to be at the halfway mark.
Week 7 — Events Explorer
- Build the Events Explorer page–once again, basically a rip-off of the Fresh Releases page, as I mentioned in Section 2.a
- Add the “Events” card to the Explore page so people can actually find it
- Wire it up to GET /1/explore/events
- Hook up area-based filtering using the MB API (/ws/2/event?area=...)
- Deliverable: Events Explorer page, accessible from the Explore hub, showing upcoming events with area filtering

