
Moved Supabase tables to private schema

For Creo, we decided to use Supabase, mainly because of its Auth. Slowly, we started using the database as well and ended up with ~20 tables in the public schema. We did not enable RLS (Row Level Security) for a long time, even though the dashboard kept warning us about it. This was mainly because I wasn’t initially aware of how Supabase works and who it is for; by the time I realized it, we didn’t have enough time to fix it. The task stayed on my list of things to do before a public launch (no pun intended).

Supabase exposes all the tables in the public schema via its auto-generated REST APIs. This was something I didn’t want.

We had the following options:

  1. Enable RLS.
  2. Move to a different Postgres provider.
  3. Move the table to a different schema (e.g., private).

1 - Enabling RLS didn’t make sense, as some tables held no user data at all. And since we had our own backend, I didn’t want to expose even the user-related tables publicly via the APIs (even though users could only access what they own), because that would reveal the structure of the tables. The less I share, the better: no need, don’t share. Call it the principle of least privilege, or the need-to-know principle.

2 - We had a trigger on the auth.users table (a Supabase-owned table for managed Auth). Its job was to INSERT into our own users table (public.users), which we created to store user metadata. Switching to a different provider would have been cumbersome; we would likely have needed a webhook handler, among other things, to replicate this.
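For context, such a trigger typically looks like this (a minimal sketch following the common Supabase pattern; the function name, columns, and metadata fields are assumptions, not our exact code):

```sql
-- Hypothetical sketch: mirror newly created auth users into a
-- metadata table. Function and column names are illustrative.
create function public.handle_new_user()
returns trigger
language plpgsql
security definer
as $$
begin
  insert into public.users (id, email, created_at)
  values (new.id, new.email, now());
  return new;
end;
$$;

create trigger on_auth_user_created
  after insert on auth.users
  for each row execute function public.handle_new_user();
```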

3 - This is what we ended up doing: we moved ALL the tables from the public schema to a private schema. You can create a new schema with any name; I just called it private. Any schema that isn’t called public will not expose its data via the APIs.
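Creating the schema itself is a one-liner. If your backend connects with its own database role, you’ll also want to grant that role access to the new schema (a sketch; "backend_role" is a placeholder for your actual role name):

```sql
create schema if not exists private;

-- Grant the backend's role access to the new schema.
-- "backend_role" is a placeholder, not a Supabase default.
grant usage on schema private to backend_role;
grant all on all tables in schema private to backend_role;

-- Make sure future tables get the same grants.
alter default privileges in schema private
  grant all on tables to backend_role;
```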

Supabase doesn’t provide a way to completely disable the REST APIs on the public schema. So the only options left were to do this or to write RLS policies, and the latter made no sense since I didn’t want to publish anything via the Supabase APIs at all. Furthermore, we already have our own backend, which doesn’t need any APIs from Supabase as it connects directly to the DB.

A whitelist of exposed schemas (with the option to remove public from it) would have been a really nice feature. But anyway.


These are the migration steps we followed; they might be useful to someone:

  1. Create a private schema from the Supabase dashboard.
  2. Run the DDL migration on the private schema.
  3. Export the DML (data) from the public schema:
    $ pg_dump -h {host} -U {user} -n public --data-only postgres > prod_raw_dump.sql
    
  4. Remove anything that doesn’t need to be re-applied; we just needed the COPY statements and the sequence-setting (setval) sections.
  5. In the exported DML, rename the public schema to private.
  6. Run this .sql on the private schema:
    $ psql -h {host} -U {user} -d {db} -f prod_new_dump.sql -v ON_ERROR_STOP=1 --set=schema_name=private
    
  7. Delete the auth.users triggers that were inserting into public.users.
  8. Delete all the tables, etc., from the public schema.

I run raw SQL DDL migrations using Postgrator and felt safer doing the DDL first. You may not need to do the same; you could export the entire schema, rename the schema in the exported file, and apply it to the private schema directly.
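Steps 7 and 8 boil down to SQL along these lines (the trigger, function, and table names are illustrative; list your own objects and verify the private copies before dropping anything):

```sql
-- Step 7: remove the auth.users trigger and its function.
-- Names here are placeholders for your actual trigger/function.
drop trigger if exists on_auth_user_created on auth.users;
drop function if exists public.handle_new_user();

-- Step 8: drop the old public tables once the data in the
-- private schema is verified. CASCADE also removes dependent
-- objects (views, foreign keys), so double-check first.
drop table if exists public.users cascade;
-- ...repeat for the remaining public tables.
```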


Types of Supabase users when it comes to the DB:

  1. Those who don’t have their own backend.
  2. Those who have their own backend.

Supabase is well-suited for 1 - that’s their primary focus. You create your tables, and your web client queries the DB just by calling an API. What users can access is determined entirely by how you’ve set up RLS.
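For that first group, the usual setup is a policy that scopes rows to the authenticated user, roughly like this (a sketch; the table and column names are illustrative):

```sql
-- Illustrative RLS setup: each user can only read their own rows.
alter table public.todos enable row level security;

create policy "users read own todos"
  on public.todos
  for select
  using (auth.uid() = user_id);
```

With RLS enabled and a policy like this in place, the auto-generated API only returns rows whose user_id matches the caller’s JWT identity.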

2 - We fall into this category: we had our own backend connected directly to the DB, so we didn’t need Supabase’s REST APIs as a secondary layer.