GSoC 2018: More detailed integration of AcousticBrainz with MusicBrainz



A much firmer integration with the MusicBrainz database, allowing users to better understand their data.

Personal Information


The AcousticBrainz project currently relies on musicbrainzngs, a thin Python binding of the MusicBrainz web service, to look up recording IDs and fetch information about requested entities. python-musicbrainzngs in turn calls the XML Web Service (an interface to the MusicBrainz database), which runs the queries and serves the results.

Recordings in AcousticBrainz are stored by their MusicBrainz MBID. Accessing MusicBrainz data from AcousticBrainz through the present web API is slow, so we should have a tighter integration with the MusicBrainz data in order to access it faster and more efficiently. When many users request recording information from the AcousticBrainz site, each page triggers many requests to the web service, which takes a long time and increases the load on the server. My proposal is to access the MusicBrainz database from within AcousticBrainz, independently of what is done at present. Directly connecting to the MusicBrainz database, or importing it into a local schema in the AB database, would let us query the database directly and significantly speed up page loads in AB.

With direct access to MusicBrainz data, we can integrate much more tightly with the MusicBrainz database and use its data in many more places in AcousticBrainz, such as: giving users real-time feedback about artist filters while creating datasets; using MBID redirects to stop users from adding duplicate recordings to a class; allowing recordings to be added by common features such as the same tag, release or year; and showing visualizations and statistics for the MusicBrainz data present in AcousticBrainz.

How to access the MusicBrainz database?

There are two ways of performing the integration with the MusicBrainz database (as per the ideas page): a direct connection to the MusicBrainz database, or copying the relevant information from the MusicBrainz database into a separate schema in the AcousticBrainz database.

For a direct connection between the MusicBrainz database and AcousticBrainz, we can run a separate container based on the Docker image from musicbrainz-server and import the MB database dumps into it.

Copying the data into a separate schema in the AcousticBrainz database would make data access much faster than both mirroring MusicBrainz directly and the web service in use today, since we could join across tables and retrieve the data with a single query against the AcousticBrainz database.

Here, I am going to discuss the pros and cons of both methods of MB database access:

Direct connection with MusicBrainz Database


Pros:

  • The XML web service receives a lot of requests per page, which puts heavy load on the service; connecting directly to the MusicBrainz database would give a significant increase in speed.
  • With a direct connection we would not need periodic updates, since the database stays synchronized.


Cons:

  • As the two databases are separate, we cannot join tables across them directly; we have to run a query against each database.
  • Issuing separate queries to two databases can be slower than replicating the database.

Copy relevant information from MusicBrainz database into a separate schema in AcousticBrainz database


Pros:

  • The XML web service receives a lot of requests per page, which puts heavy load on the service; importing the MusicBrainz data into a separate schema in the AcousticBrainz database and querying the AB database directly would give a large increase in speed.
  • With the MB data inside the AB database, we can fetch what we need with a single query and join across tables. Example: if we want low-level data for a recording of a release, we can join the lowlevel_json table of the AB database with the recording, artist_credit and release tables of the MB database.
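The single-query join mentioned above could look roughly like this (the lowlevel/lowlevel_json names come from the AB schema and the rest from the public MusicBrainz schema, but the exact join columns here are assumptions, not final code):

```python
# Sketch of a single query joining AB's low-level data with copied
# MusicBrainz metadata, assuming both live in one database. The path
# recording -> track -> medium -> release resolves "a recording of a
# release"; column names are assumptions.
LOWLEVEL_FOR_RELEASE_QUERY = """
    SELECT llj.data, rec.gid AS recording_mbid, AS artist
      FROM lowlevel_json AS llj
      JOIN lowlevel AS ll ON =
      JOIN recording AS rec ON rec.gid = ll.gid
      JOIN artist_credit AS ac ON = rec.artist_credit
      JOIN track AS t ON t.recording =
      JOIN medium AS m ON = t.medium
      JOIN release AS rel ON = m.release
     WHERE rel.gid = :release_mbid
"""
```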


Cons:

  • We need to update the local schema of MusicBrainz data in AcousticBrainz whenever MB is updated.
  • As we copy only the relevant information from the MB database, whenever our needs or use cases change we have to consider copying the data again.

Implementation details

Setup for the direct connection with MusicBrainz database

New infrastructure allows us to easily read data directly from the MusicBrainz database. To run AcousticBrainz in development, we would connect directly to a MusicBrainz database using the existing Docker image of musicbrainz-server. This service has an option to download the database dumps and import the data into the local database. I have started work on adding a new service, a separate container in the AB docker-compose files for development, which downloads the MB database dumps and establishes a direct connection; I have opened a PR for this. To run AcousticBrainz in production, we would connect directly to the main MusicBrainz database without needing an additional Docker image.

After setting up the MusicBrainz database in the AB development environment, the next step is fetching data from it. I have used mbdata.models, which provides SQLAlchemy models mapped to the MusicBrainz database tables, to write queries for accessing the data. In the same PR I have also worked on the Recording entity, fetching recording information using mbdata.

Quite similar work has been done in CritiqueBrainz, so, as suggested by my mentor, I would write the code accordingly, move the existing code from CB into brainzutils, and use it in AB so that other MetaBrainz projects can easily use it too.


└── external
    └── musicbrainz_db_direct
        ├── tests
        │   ├──
        │   ├──
        │   └──

The musicbrainz_db_direct module in brainzutils package would contain following functions initially:
    - session()
    - initialize_db()
    - class DataNotFoundException(Exception)
    - get_entities_by_gids(mbid, entity_type)
    - check_duplicate_recording(mbid)
    - add_recording()
    - get_artist_info(mbid)
    - check_presence()
    - get_recording()
    - to_dict_recording()

Implementation details to copy relevant information from MusicBrainz database

For copying the MusicBrainz database: since the MB database is large, it might not be feasible to store a copy of the entire database. We should at least include the recording title, recording gid, artist name, artist gid, artist count, release year, release gid, release name, track length, track position and track number. While the MusicBrainz database contains around 18 million recordings, the metadata we store in the local schema in AcousticBrainz would cover only around 3 million of them, because AB has around 3 million recordings in its database.

A proposed structure for directories and files we would have for this method:

└── acousticbrainz-server
    └── db
        └── musicbrainz_external
            ├── tests
            │   ├──
            │   ├──
            │   └──
    └── webserver
        └── views
            ├── test

The files in this structure would contain functions similar to those shown above for the previous method, but we would query the database with raw SQL rather than mbdata, because we don't have models defined for the AB database as we have for the MB database.

To connect to the MB database, I would use the container of the Docker image that I added to AB in this PR. Since the tables are not in the same database, we cannot use a direct SQL COPY command to copy the metadata. Instead, the local schema can be populated with SQLAlchemy: first create an engine to the MusicBrainz database (via the Docker container) to query the data, then an engine to the AcousticBrainz database to store the results, build a new schema for the MB tables in the AB database, and finally execute the queries that insert the metadata into the tables. We would transfer the data in batches (a bunch of rows at a time) rather than copying an entire table at once, to reduce the load on the server.

I would add a function for importing the metadata into the AB database.
The metadata can be imported using: python import_musicbrainz_data, and this command would be added to the Dockerfile.

A pseudo-code outline to import the metadata:

connect to the docker container of musicbrainz-server present in AB
# engine to the MB database to get the metadata from
source_engine = create_engine()
source_session = sessionmaker(source_engine)
# engine to the AB database to save the tables in
destination_engine = create_engine()
dest_session = sessionmaker(destination_engine)
create new schema and new tables in AB database
for data in batches:   # getting a bunch of data at a time
    execute the query to insert the data into the table
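The batching step above can be made concrete with a small runnable sketch. In the real setup the source would be the MusicBrainz Postgres database reached through the Docker container and the destination the AcousticBrainz database, both via SQLAlchemy engines; sqlite3 stands in here only to keep the sketch self-contained, and the table and column names are illustrative:

```python
# Copy recording metadata in batches so we never hold an entire table
# in memory, keeping the load on the server low.
import sqlite3

def copy_in_batches(src_conn, dst_conn, batch_size=1000):
    """Copy all rows of `recording` from src_conn to dst_conn, one
    batch at a time."""
    dst_conn.execute("CREATE TABLE IF NOT EXISTS recording (gid TEXT, name TEXT)")
    cursor = src_conn.execute("SELECT gid, name FROM recording")
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        dst_conn.executemany("INSERT INTO recording VALUES (?, ?)", rows)
    dst_conn.commit()
```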

for the get_data() function:

get the list of all newly added recording mbids
for each recording mbid:
    look up recording in the database using mbid redirect table if needed
    get the list of artists performing on the recording
    for each artist:
        get gid and name
    get the list of releases
    get gid_redirect information using recording_gid_redirect table
    get track_name, position and length

We would not need to import all the tables from the MusicBrainz database for use in AcousticBrainz. The local schema should at least contain the tables and columns shown in this ER diagram:

For downloading replication packets and applying them to the tables, I suppose we have to include every column of each table we require, and if our required tables have foreign keys to other tables, we should include those tables as well. The following is the list of tables we should import:

  • recording
  • artist_credit
  • artist_credit_name
  • artist
  • release
  • track
  • recording_gid_redirect
  • area
  • gender
  • release_group
  • language
  • packaging
  • script

What I propose is to find a reasonable compromise between the two methods so that we get the best of both: create a direct connection to the MusicBrainz database while copying only a small subset of the entire MusicBrainz database into the AcousticBrainz schema, and use the direct connection in places such as when a new recording is added to the AB database.
The most important part of the project is to test both methods at the scale of AcousticBrainz to see which works best for integrating the AB database with MB. After developing both methods, I would implement some test queries, using mbdata.models for the direct connection and raw SQL for the imported metadata, to check that the test queries work. Once they do, I would run the experiments on the complete AcousticBrainz database in collaboration with the MetaBrainz team in order to decide which method best fits our needs. We would decide on the basis of several parameters:

  • Speed
  • Storage
  • Memory usage
  • Webpage rendering time, i.e. measuring the average time it currently takes to render a page on the AcousticBrainz website, measuring the time taken under each of the two methods, and comparing against the present time.
  • Response time and failure behaviour; for example, with a direct connection, a problem in the MB database could leave the process waiting indefinitely or starving.

Once we have imported the metadata into a local schema, we would be dealing with two ongoing tasks:

  • Updating the metadata in AcousticBrainz whenever the data in MusicBrainz server is updated
  • Fetching new metadata when a new recording is added to AcousticBrainz

For updating the MB data we have in AB, downloading the replication packets that MB provides allows a direct mapping from one database to the other. Replication packets are how a copy of the MusicBrainz database is kept up to date. If we keep the structure of the tables exactly the same, we can simply look in the replication packet file, check whether an item was updated on the MusicBrainz server and, if it is present in the local schema, copy the data. A last-updated column can then hold the timestamp of the replication packet, or its sequence number. We could also save space with a single table storing the timestamps for all tables, like the replication_control table in the MB database. We would compare the timestamps between the original MB database and the local schema, and whenever the original database's timestamp is greater than the one in our local schema, the local schema would be updated.

After modification of the LoadReplicationChanges script for:

  • skipping UPDATEs and DELETEs for rows not present in AB database.
  • skipping INSERTs for recordings not in AB tables.
  • skipping replication tables not copied in AB database.

we can use the replication script to apply the changes to the tables in the local schema of MB data.
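The filtering the modified script would do for each change in a packet can be sketched as a small pure function. Here `local_tables` is the set of tables copied into AB and `has_row` is an assumed callback that checks whether the affected row exists in the local schema; INSERTs on the recording table would additionally be checked against the MBIDs submitted to AB, which is omitted here:

```python
# Decide whether one (table, operation, primary_key) change from a
# replication packet applies to the local copy of the MB data.
def should_apply_change(change, local_tables, has_row):
    table, operation, primary_key = change
    if table not in local_tables:
        return False  # this table was never copied into AB
    if operation in ("UPDATE", "DELETE") and not has_row(table, primary_key):
        return False  # the row is not in our subset, nothing to change
    return True
```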

For fetching metadata whenever a new recording is added to the AcousticBrainz database, I would write a script that copies and saves only the subset of information we require into the local schema of MusicBrainz data in AcousticBrainz. When fetching data, we would first try AB's subset of MB; if an MBID exists in AB but is not present in our metadata tables (i.e. there is a new addition to the AcousticBrainz database), we would get the data over the direct connection to the MB database and save it in the subset (the local schema of MB data).

Using MusicBrainz data in AcousticBrainz: How should we use our data in order to get a better understanding?

After implementing the database access method, the next step is fetching the data and using it in different places in AB. Presently we get recording information using python-musicbrainzngs. An example of code to fetch recording information for the AB recording page, GET /{uuid:mbid}:

import sqlalchemy

import db  # AcousticBrainz's top-level database package
from brainzutils import cache

CACHE_TIMEOUT = 86400  # 1 day

def get_recording_by_id(mbid):
    """Fetch recording metadata from the local MusicBrainz schema,
    caching the result. The column list and join conditions below
    follow the MusicBrainz schema but are only a sketch."""
    mbid = str(mbid)
    recording = cache.get(mbid)
    if not recording:
        with db.engine.connect() as connection:
            result = connection.execute(sqlalchemy.text("""
                SELECT r.gid,
                       AS recording_name,
                       AS artist_name,
                       a.gid AS artist_mbid,
                       AS release_name,
                       t.position AS track_position
                  FROM recording AS r
                  JOIN artist_credit AS ac
                    ON r.artist_credit =
                  JOIN artist_credit_name AS acn
                    ON acn.artist_credit =
                  JOIN artist AS a
                    ON acn.artist =
                  JOIN track AS t
                    ON t.recording =
                  JOIN medium AS m
                    ON = t.medium
                  JOIN release AS rel
                    ON = m.release
                 WHERE r.gid = :recording_id
            """), {
                "recording_id": mbid,
            })
            if not result.rowcount:
                raise db.exceptions.NoDataFoundException
            row = result.fetchone()
            recording = dict(row)
        cache.set(mbid, recording, time=CACHE_TIMEOUT)
    return recording

Later it would be time to integrate the AB database with MB, so I would help perform the integrations and use the data in many places in AB.

Using MBID redirect information to determine when two distinct MBIDs refer to the same recording

An entity can have more than one MBID. When an entity is merged into another, its MBID is redirected to the other entity.
To detect duplicate recordings, I would implement a function that resolves an MBID to its original entity. I would query the recording and recording_gid_redirect tables of the MB database and use that information in a function that returns the entities keyed by their MBIDs, which would then be used to prevent adding duplicate recordings to a class. I would keep a set of the original MBIDs of the recordings that have already been added, so that whenever a new recording arrives we compare its original MBID with those already in the set. With the help of MBID redirects, a user then cannot add the same recording twice to a particular class, because the MBIDs would redirect to the same original entity.
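The redirect-and-set logic above can be sketched in a few lines. Here `redirects` maps a merged (old) MBID to its new MBID, mirroring what a query over recording_gid_redirect would return; the class and names are illustrative, not existing AB code:

```python
# Duplicate detection via MBID redirects.
def resolve_mbid(mbid, redirects):
    """Follow redirects until the canonical MBID is reached."""
    while mbid in redirects:
        mbid = redirects[mbid]
    return mbid

class DatasetClass:
    """Keeps the set of canonical MBIDs already added to one class."""

    def __init__(self, redirects):
        self.redirects = redirects
        self._canonical = set()

    def add_recording(self, mbid):
        """Add a recording; return False if an MBID redirecting to the
        same entity is already in the class."""
        canonical = resolve_mbid(mbid, self.redirects)
        if canonical in self._canonical:
            return False
        self._canonical.add(canonical)
        return True
```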

We can implement this easily over the direct connection to the MB database using mbdata.models, much as is done in CritiqueBrainz here. But only after testing both methods of database access will we be able to decide which method works best for this integration.

Using Artist information in AcousticBrainz

Artist filtering means that only one recording per artist should be present in any dataset class, so that during evaluation each class has unique artists when the user is presented with a class-size cutoff during training. Artist filtering cannot be used when creating a challenge for classifying artists themselves. With fast MB database access, we can give users real-time feedback about artist filters while creating datasets, by fetching the MBID of a recording from the AB database and joining the artist and recording tables.

We might implement the process this way: whenever a user adds a recording, we fetch the artist information from the artist table and save the artist name or MBID in a set. When the user then attempts to add another recording by the same artist to that class, we check whether the recording's artist is already in the set; if it is, we don't allow the recording to be added to that class.
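That check reduces to a small pure function. `get_artist_mbid` stands in for the database lookup joining recording with artist_credit and artist; it is an assumption, not existing AB code:

```python
# Real-time artist-filter check: one recording per artist per class.
def try_add_with_artist_filter(recording_mbid, seen_artists, get_artist_mbid):
    """Return True and remember the artist if this recording's artist
    is new to the class; False if the artist already has a recording."""
    artist_mbid = get_artist_mbid(recording_mbid)
    if artist_mbid in seen_artists:
        return False
    seen_artists.add(artist_mbid)
    return True
```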

Possible extensions

If I finish my GSoC project early, I plan to spend the rest of my time using MusicBrainz data to show statistics.

MB data can be used widely across AB. I would like to add statistics and visualizations with charts and graphs. We could add a new view, perhaps one that shows sitewide graphs. Stats could be stored in jsonb format in a statistics table in the AB database. Once calculated, stats would be saved per entity, so that when the page is opened again the calculation is not repeated for the same data and the stats are fetched from the database instead. We would recalculate the statistics on a weekly or fortnightly basis.

I would add charts based on following:

  • Top X Artists by recordings
  • Most frequently submitted recordings
  • Top X submitted recordings by an Artist
  • Most frequently submitted releases over a year
  • Most / least submitted MusicBrainz tags

A graph for most commonly submitted recordings and top 10 artists would look like:

I have used Plotly for demo graphs. We could use plotly.js or Bokeh for data visualizations, depending on the opinions of the community.


A phase-by-phase timeline of the work to be done is summarized as follows:

  • Community Bonding (April 23 - May 14)
Spend this time formalizing what exactly I need to code, and start setting up the connection with the Docker container that connects to the MB database for the AB development environment. Also, discuss design decisions with my mentor to make sure no bad decisions are made early in the process.

  • Phase 1 (May 14 - June 11)
I aim to complete the method of copying the metadata from the MB database into a separate schema in the AB database: use the Docker image in the Dockerfile to connect to the MB database, load the metadata, and import the data into a local schema in AB by adding an import function. I would also work on writing a script that fetches data from the MusicBrainz database whenever a new MBID is added to AcousticBrainz.

  • Phase 2 (June 12 - July 9)
In this phase I first aim to work on updating the local schema of MB data whenever the MusicBrainz server is updated, by downloading the replication packets. Then, test both methods at the scale of AcousticBrainz to decide which works best for us. In the last days of this phase I will update the AcousticBrainz build documentation.

  • Phase 3 (July 10 - August 6)
This phase involves work on two integrations. I would start with using MBID redirect information to detect duplicate recordings, and then work on allowing users to add only one recording per artist, in real time, to each class for artist filtering. If I finish early, I would help with using MB data to show statistics in AB.

  • After Summer of Code
Continue my work adding more functionality to AcousticBrainz, such as letting users add recordings to a particular class in the dataset editor based on criteria like a given artist, a given release, the same release year, or a given MusicBrainz tag, and work on other MetaBrainz projects as well. Working on the new machine learning infrastructure would also be very interesting.

Here is a week by week timeline of my work for summer:

GSoC 1st week (14th May to 20th May): Begin with the Dockerfile setup. Connect to the container made for the direct connection to the MusicBrainz database. Start by loading and copying a small amount of data first.

GSoC 2nd week and 3rd week (21st May to 3rd June): Writing the script to import the data in batches from MB database to a schema in AB database.

GSoC 4th week (4th June to 10th June): Work on writing the script to fetch new data to local schema whenever new MBID is added to AcousticBrainz database.

GSoC 5th week (11th June to 17th June): Start working on writing scripts for changes in replication packets update and insert feature.

GSoC 6th week (18th June to 24th June): Work on downloading replication packets and set up using cron and update the local schema whenever there is any update in MusicBrainz server.

GSoC 7th week (25th June to 1st July): After developing both the methods, I would work on testing sample queries for both the methods and see if queries are working properly.

GSoC 8th week (2nd July to 8th July): Testing both the methods on the scale of AcousticBrainz on the basis of several parameters and thus deciding which method works best for us.

GSoC 9th and 10th week (9th July to 22nd July): Work on using MBID redirect information to prevent duplicate recordings from being added to a class.

GSoC 11th week (23rd July to 29th July): Start working on artist information to give users real-time feedback about artist filters.

GSoC 12th week (30th July to 5th August): Complete previous week’s integrations if left with any and solve bugs. Work on documentation for AcousticBrainz website.

GSoC 13th week (6th August to 14th August): Complete if there is any pending stuff and work on final submission and make sure that everything is working fine.

Detailed Information about myself:

I am a Computer Science undergraduate student at National Institute of Technology, Hamirpur. I've been helping out with development work in AcousticBrainz since last December. A list of my commits and pull requests to acousticbrainz-server and acousticbrainz-client can be found here, here and here. The most notable pull requests I've worked on are: MB database image setup in AB and a feature to select SVM parameter preferences for dataset evaluation.

Question: Tell us about the computer(s) you have available for working on your SoC project!

Answer: I have a DELL laptop with an Intel i3 processor and 6 GB RAM, running Ubuntu 16.04.

Question: When did you first start programming?

Answer: I have been programming for 5 years; I started in 12th grade, writing my first code in C++, and picked up Python when I started my degree 3 years ago.

Question: What type of music do you listen to?

Answer: I mostly listen to pop, rock and slow music by artists such as Coldplay, The Chainsmokers, Arijit Singh, Arctic Monkeys, One Direction and Ed Sheeran.

Question: What aspects of AcousticBrainz interest you the most?

Answer: This is one of the first projects to store low-level data for music and run machine learning jobs on it. It provides a huge amount of acoustic information about music, both low-level and high-level descriptors, and the best thing is that it is open to the public. Since I am interested in researching machine learning in music, where I can use AcousticBrainz data, this project really interests me.

Question: Have you ever used MusicBrainz to tag your files?

Yes, I have been using MusicBrainz Picard to tag my music files.

Question: Have you contributed to other Open Source projects? If so, which projects and can we see some of your code?

I have mainly contributed to AcousticBrainz and I have made small patches to GNOME Music and Photos. Here’s a link to my contributions. I have also worked on many open source and college projects. You may refer to my Github Handle.

Question: What sorts of programming projects have you done on your own time?

I have worked on Summing Up Bot, which generates summaries of long texts; Hack-lastfm, which dynamically generates statistics and collages for users; and sentiment analysis on sentences to predict stock markets. I have also worked on an Email Spam Classifier that uses Support Vector Machines to classify an email as spam or non-spam, and a Reddit Bot that fetches top news, jokes and related articles for Reddit users.

Question: How much time are you available, and how would you plan to use it?

I have holidays during most of the coding period of GSoC and would be happy to give 50+ hours per week to my project.

Question: Do you plan to have a job or study during the summer in conjunction with Summer of Code?

No, I have a vacation from my college during the GSoC period and hence no obligations from college, and I won't be involved in any other significant work besides the Summer of Code project.


This is an initial draft of my GSoC proposal. Feedback and suggestions would be greatly appreciated. :slight_smile:


Thanks for the proposal! The proposal has a good level of detail, and you’ve already made a start on some early parts of the project, which is great to see. I’m going to quote some specific things in the proposal that I think could be changed or made clearer, and then I’ll give some overall notes.

For me, this initial proposal section only says what we can do with the change, it doesn’t say why we want to do it. These proposals should come with a strong motivation to do the task, otherwise we’re just writing code because we want to (that is, why do we want to have tighter integration with the database when we can in theory do exactly the same with the web api?)

These shouldn’t be images, they’re too difficult to talk about and copy/paste from.

Direct connection pro: lots of requests per page with the xml webservice

This is a pro of both approaches, and is the reason why we want to stop using the webservice :slight_smile:

You can make it explicit here that you are referring to development. When we run AcousticBrainz in production we can connect directly to the main MusicBrainz database without having to use an additional docker image.

We had a very similar SoC project last year to connect directly to a MusicBrainz database server for CritiqueBrainz (GSoC 2017: Directly Access the MusicBrainz Database in CritiqueBrainz). We shouldn't duplicate any work here. It would be a good idea to, as part of this project, move the existing code in CB to our cross-project python library, brainzutils, and use the same code in both CB and AB.

I would make the direct database and local copy options different sections. In each of these sections it would be good to include a description of the method (like you already have), and then also add a description of how you will do this (e.g. you might want to talk about new modules (acousticbrainz.external?) or class names, or workflow - here you should show that you have thought a little bit about how you might write this system. It doesn’t matter if the final version doesn’t look like this, but we’re interested in seeing that you can think about the process in a “big picture” manner)

This also needs more detail. I’m not sure what you mean by dataframe here - are you considering using pandas? I don’t think that this is a good fit. You’ve already had some experience with the sqlalchemy bindings and seem to understand them well. Be clear about what tools you want to use, perhaps you just need to change some terminology.

Be explicit here. I want to know what the command should be called, and a simple pseudo-code description of how it works, e.g.

get list of all newly added recording mbids
for each recording mbid:
    look up recording in database, using mbid redirect table if needed
    get list of artists performing on recording
    for each artist...

Some time ago I made a proposal of a schema for this metadata:
The difference here is that I also include release group, and don’t include track or medium. We should continue discussing this to see what the ideal tables to include are. I recommend that you talk with @reosarevok and @murdos to see if they have suggestions, they always bring up interesting points when asked a question like this.

This type of code is not necessary for the proposal

This is a very important part of this project. You should have an entire section in the proposal talking about the evaluation. We want to know at least

  • How you will do the evaluation
  • a suggestion on how you will decide what is the best system

You asked in a previous forum post (and I didn’t answer, sorry), and this is very important. You correctly realised that we need a lot of data to replicate the size of the current acousticbrainz server. I suggest that you develop the two methods of getting metadata, and implement some test queries. When you have the test queries working, MetaBrainz can provide you with a virtual machine that contains a complete copy of the AcousticBrainz database, and we can run the experiments there.

I can tell you now that you won’t be able to finish both the integration of two methods of getting data, testing which one to use, and all of these implementations of the data. I recommend that you choose only two of the four proposals that you have listed here.
My favourite options are using artist information for doing Artist Filtering in datasets, and for determining redirect MBIDs to see what submissions are the same. However, you’re welcome to choose whichever items you prefer. Note that things like the statistics may require information from the duplicate recordings task, and so you would have to do that one first.

In general I like the overall goal of this proposal, but it needs to be improved to show two main things:

  1. You need to explain not only what you want to do, but why. Start each section with a stand-alone sentence explaining the goal of this section, and then continue with the text that you already have explaining how you think you will do it
  2. Include more detail in the “how you will do it” sections. Here you should show to us that you have already looked at the acousticbrainz server and understand the data enough to start making proposals using module names, function names, or pseudo code. You should show that you’ve potentially found issues in the data or process (don’t worry if you don’t know how to solve these, we can help with that).

Thanks again for the proposal, I look forward to reading a revised version!


This information is stored in a table in the MusicBrainz database. If the database is already going to be queried directly or fully or partially mirrored, why not just also query/mirror the information in this table? (There might be a perfectly good reason, I just don’t know it. :slight_smile: )


The idea is to use MB database tables for implementing a function which redirects MBID to its original entity. I would query the recording and recording_gid_redirect table (let’s say for entity type as recording) of MB database to use the information in the function which returns the entities with their MBIDs which would then be used to not allow adding duplicate recordings to a class. I would also add this detail to MBID redirect section of the proposal. Thank you!


Thanks for the review, @alastairp

Thank you! :slight_smile:

I have updated the proposal accordingly.

Changed the pros and cons from image format to plain text.

I have updated the proposal accordingly.

I have a query regarding the suitable tables we should import:

As we require tables according to our needs in AcousticBrainz, I have added to the proposal the ideal tables we should have. (along with the tables whose foreign keys we have in our required tables)
As the tables have foreign keys for some other tables, would it be ideal to include those other tables as well though we won’t be using the data of those tables?
For example: for release table, we have foreign keys for tables:- language, release_packaging etc. So should we be including these other tables as well? Because replication packets might work only on the basis of the present structure of MusicBrainz tables.
Or would it be possible to skip those columns of the tables which we don’t require to use in AB? Looking forward to your feedback. :slight_smile: