What does AP mean?
Something like that, yeah. A federation of servers where hyperlocal data reside, maybe even maintained by local councils.
Ok I see. That is an intriguing idea indeed!
Are we talking about end-user federation or node-to-node federation?
I note that end users are rarely direct consumers of #openstreetmap data.
“The Map” is the huge #Postgres database serving the data or, alternatively, the planet.osm dump or its diffs.
This data is served via APIs to various consumers and also used to produce raster tiles (via another DB) - 1/3
@Ca_Gi @Antanicus @feonixrift @rory
Consumers usually receive:
* Raster tiles (e.g., https://openstreetmap.org/ visitors)
* Data from various APIs (e.g., #Nominatim, #Overpass users)
* Vendor-specific data bundles (e.g., #OsmAnd, #Maps.me)
@rory can you confirm my understanding of the above is reasonably accurate? - 3/3
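As an aside, here is a minimal sketch of what an #Overpass consumer in the list above typically sends. The endpoint is one of the public Overpass instances, and the specific tag and bounding box are illustrative assumptions, not part of the thread:

```python
# Sketch: build an Overpass QL query for nodes tagged key=value inside a
# bounding box. The tag ("amenity"="cafe") and coordinates are made up.
def overpass_query(south, west, north, east, key, value):
    """Return an Overpass QL string selecting matching nodes in the bbox."""
    bbox = f"{south},{west},{north},{east}"
    return f'[out:json];node["{key}"="{value}"]({bbox});out;'

OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # a public instance

query = overpass_query(48.85, 2.29, 48.87, 2.31, "amenity", "cafe")
# A consumer would POST this string as the "data" parameter to OVERPASS_URL.
```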
In theory this could also be done via #Postgres replication, but with less control by admins as to what gets replicated. You probably do not want someone's DROP DATABASE to replicate across all instances. 🙂
Why do you say it's hard to federate? Planet.osm allows anyone to run a full instance, reasonably well synchronised with the main OSM server.
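To illustrate how that synchronisation works: the replication diffs on planet.openstreetmap.org are published under a predictable directory layout, where the zero-padded nine-digit sequence number is split into three groups of three. A small sketch of that mapping (the layout matches the public minutely replication at https://planet.openstreetmap.org/replication/minute/):

```python
# Sketch: map a replication sequence number to its diff file path under
# the planet.osm replication directory layout (NNN/NNN/NNN.osc.gz).
def diff_path(sequence: int) -> str:
    s = f"{sequence:09d}"  # zero-pad to nine digits
    return f"{s[0:3]}/{s[3:6]}/{s[6:9]}.osc.gz"

# e.g. sequence 4275000 lives at 004/275/000.osc.gz;
# a mirror fetches each new .osc.gz and applies it to its local database.
```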
Is it not possible that the core OSM setup (Postgres + Mapnik, or whatever covers your needs) looks hard to install and maintain, and that's why there aren't that many known instances around? I don't know, just speculating.
I don't have any specific notes. I don't think there were many specific problems. It didn't take me too long.
Then again, I have a lot of experience installing web applications and Unix services, so it might have been easy for me for that reason! 😆
I just ran it in "foreground" debug mode, which was "good enough" for simple usage. Didn't do any email stuff either.
@Antanicus @Ca_Gi @feonixrift
AFAIU that's correct. Any changes that you do want to share would be pushed onto the main instance, and changes you want to keep local would be applied directly to your database.
A possible improvement (but something like this may already exist) would be to have instances broadcast local changes to other federated nodes in real time. This way there wouldn't necessarily be a single authoritative instance. This may or may not be desirable.
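A toy sketch of that broadcast idea, just to make it concrete. The node names, the change format, and the naive de-duplication are all illustrative assumptions; a real federation would need conflict resolution, authentication, and ordering guarantees:

```python
# Toy model: each federated node applies local edits and broadcasts them
# to its peers, so no single instance is authoritative.
class FederatedNode:
    def __init__(self, name):
        self.name = name
        self.peers = []    # other FederatedNode instances
        self.changes = []  # ordered list of (origin, change) tuples

    def apply_local(self, change):
        """Apply a local edit, then push it to every peer."""
        self.changes.append((self.name, change))
        for peer in self.peers:
            peer.receive(self.name, change)

    def receive(self, origin, change):
        if (origin, change) not in self.changes:  # naive de-duplication
            self.changes.append((origin, change))

a, b = FederatedNode("a"), FederatedNode("b")
a.peers, b.peers = [b], [a]
a.apply_local("add node 1")
b.apply_local("move node 2")
# Both nodes now hold both changes, without a central instance.
```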
@61 Ok, now it's clearer. Yeah, I like the idea. Different instances keep a copy of the main, basic data (which would be the same for every instance; maybe the sync process could be verified via a blockchain-style verification system), and alongside that a p2p federation shares local changes.
@61 you are correct
#OSM uses many communication channels (IRC, mailing lists, Slack, Twitter, Telegram, Mastodon (i.e. here)), plus the OSM user diaries and OSM private messages. In theory you could try to "federate" them, but they are a tiny tiny tiny part of OSM and have nothing to do with edits or the map data. So I'm not sure what benefit it would bring? And the main thing (the data) would still be centralized.
There is a proposal to remove node ids and replace them with some sort of hash (partially for performance). Changing the OSM data model can take years and years though (inertia!). We still don't have a (proper) area type.
@rory so _theoretically_ a decentralized OSM is possible.... Very interesting stuff!
Besides, it would be a terrible solution anyway. As you mentioned, and as git has proven since 2005-04-07¹, hashes are the way to go.
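A content-addressed node id could be sketched roughly like this. The serialised fields and the precision are assumptions for illustration; the actual OSM proposal may hash different data:

```python
# Sketch: derive a node id from its content, in the spirit of git's
# object hashes. Identical content -> identical id; any edit -> new id.
import hashlib

def node_hash(lat: float, lon: float, tags: dict) -> str:
    canonical = f"{lat:.7f},{lon:.7f}," + ",".join(
        f"{k}={v}" for k, v in sorted(tags.items())
    )
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

h1 = node_hash(48.8566, 2.3522, {"amenity": "cafe"})
h2 = node_hash(48.8566, 2.3522, {"amenity": "cafe"})
h3 = node_hash(48.8567, 2.3522, {"amenity": "cafe"})
# h1 == h2 (same content, same id); h3 differs (moved node, new id)
```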