
That must be a problem with my silly metric - maybe leave that out. I think you can use dice (with iou=True) to get the proper Jaccard index anyway. Sorry about that!


Thanks for your reply. I will try that out.

Hello there @daveluo_gfdrr,
first of all, thanks for your great efforts.
Here: https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=1RC5HvDwEiuJ

I noticed that you used zoom_level=19 when creating 256px tiles. What is the relation between the zoom_level and the size of the tile? If I want to create 512px tiles, what zoom_level should be chosen?
What zoom_level was chosen to create the 1024px test images?


Hi @Hasan_N,

The zoom_levels at 256x256 correspond to these meters/pixel at the equator: https://wiki.openstreetmap.org/wiki/Zoom_levels

E.g. for zoom_level=19 and tile_size=256, it’s ~0.3m/pixel. If you increase tile_size to 512x512 at the same zoom, then m/pixel would be halved to ~0.15m/pixel because the area covered by each tile polygon at each zoom level remains the same.
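That relationship is easy to compute directly. Here is a minimal helper of my own (not from the tutorial) that applies the standard Web Mercator ground-resolution formula:

```python
import math

# Earth's equatorial circumference in meters (WGS84)
EARTH_CIRCUMFERENCE = 40075016.686

def meters_per_pixel(zoom, tile_size=256, lat=0.0):
    """Ground resolution of a web map tile at a given zoom level.

    The world is one tile at zoom 0 and each zoom level doubles the
    number of tiles per axis, so a tile covers the same ground area
    at a given zoom regardless of tile_size; more pixels per tile
    just means a finer meters-per-pixel resolution.
    """
    return (EARTH_CIRCUMFERENCE * math.cos(math.radians(lat))) / (2 ** zoom * tile_size)

print(round(meters_per_pixel(19, 256), 3))  # ~0.3 m/pixel, as in the post
print(round(meters_per_pixel(19, 512), 3))  # halved to ~0.15 m/pixel
```

So doubling tile_size at a fixed zoom halves the m/pixel, and going up one zoom level at a fixed tile_size does the same.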

In that tutorial, rio-tiler uses rasterio's WarpedVRT under the hood to resample raster images from their native resolutions and projections to these standardized web map tiles, but that's not the only way to create training chips, as mentioned in my earlier post.

Dave

Hi Dave @daveluo_gfdrr, Hello all,

I opened the Accra 665946 scene using ERDAS and it shows that the pixel size is 0.02 meters, while your official webpage description says the resolution for this scene is 3 cm.

Using gdalinfo, I found the following:
Pixel Size = (0.020015187071028,-0.020015187071028).

Can you please confirm which of those three (ERDAS, GDAL, or your webpage) is accurate?

Ali


Hi @aghandour, thanks for looking into it. Yes you’re right, that scene should be listed as 2cm resolution. We’ll correct that on the site.

Hi all,
here is the full list of train_tier_1 image resolutions (I hope it will be useful for someone):
acc
665946: 0.02001518707102818
a42435: 0.032029411960186015
ca041a: 0.035820209694930036
d41d81: 0.05179965064903244
mon
401175: 0.07879995472604662
493701: 0.0420000000000015
207cc7: 0.0777476098863814
f15272: 0.039999999999998634
ptn
abe1a3: 0.2
f49f31: 0.2
kam
4e7c7f: 0.03533483541714589
dar
a017f9: 0.04162000000000368
b15fce: 0.05071999999999411
353093: 0.04833999999999547
f883a0: 0.05113
42f235: 0.07266
0a4c40: 0.07222
znz
33cae6: 0.0774800032377243
3b20d4: 0.06792999804019927
076995: 0.06411000341176987
75cdfa: 0.06035000085830688
9b8638: 0.06646999716758728
06f252: 0.059179998934268944
c7415c: 0.06451000273227692
aee7fd: 0.07398000359535216
3f8360: 0.07880999892950058
425403: 0.07528000324964522
bd5c14: 0.07842999696731566
e52478: 0.0720999985933304
bc32f1: 0.07466000318527222
nia
825a50: 0.1007506920751152

The resolutions of the test images have been removed, as far as I can see.


Hi everyone,

I think there was a question about how to create training image & label chips with windowed reads at native resolution. I added some new starter code to the bottom of the notebook "A quick intro to accessing Open Cities AI Challenge data…" showing how you can use rasterio's rasterize() and geopandas to do that.

Dave


How large is the labeling error in the tier1 data?


I marked the boundaries in orange. Is this alignment error normal?

Yes, that’s within the expected range of alignment error for tier1 data. Tier 1 doesn’t necessarily mean the labels are perfect, but that they’re much higher quality and more immediately useful for model training than tier 2. We try to provide labels of as high a quality as possible, to the extent of what participatory mapping teams have done per city/region, but there will always be some label noise. This is to be expected in real-world applications, so solutions should be prepared to work with it.


Thank you a lot, @daveluo_gfdrr.


Hi @daveluo_gfdrr, Hi all,

I am wondering how to transform the lat/long coordinates of a specific building polygon into pixel coordinates relative to the top left corner of a tiled window.

Ali
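There doesn't seem to be an answer to this in the thread, but for standard XYZ web map tiles the conversion is just the Web Mercator projection evaluated at the tile's zoom level. Here is a rough sketch of my own (assuming 256px XYZ tiles; nothing here is specific to the challenge data):

```python
import math

def latlon_to_tile_pixel(lat, lon, zoom, tile_x, tile_y, tile_size=256):
    """Project lat/lon (degrees) to pixel coordinates relative to the
    top-left corner of XYZ tile (tile_x, tile_y) at the given zoom."""
    n = tile_size * 2 ** zoom  # world width/height in pixels at this zoom
    # Web Mercator: x is linear in longitude, y uses the Mercator formula.
    world_x = (lon + 180.0) / 360.0 * n
    sin_lat = math.sin(math.radians(lat))
    world_y = (0.5 - math.log((1 + sin_lat) / (1 - sin_lat)) / (4 * math.pi)) * n
    # Subtract the tile's top-left corner in world pixel coordinates.
    return world_x - tile_x * tile_size, world_y - tile_y * tile_size

# (lat=0, lon=0) sits at the top-left corner of tile (1, 1) at zoom 1.
print(latlon_to_tile_pixel(0.0, 0.0, 1, 1, 1))  # (0.0, 0.0)
```

For polygons, apply this to each vertex; the resulting pixel coordinates can then be fed to rasterization or plotting routines that work in the tile's pixel space.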

For the DrivenData semantic segmentation challenge, here is code using the Keras segmentation_models library: https://github.com/qubvel/segmentation_models

Starter code: https://github.com/anindabitm/Open-Cities-AI-Challenge-Segmenting-Buildings-for-Disaster-Resilience/blob/master/Open_cities_challenge_Keras.ipynb

Credits to @johnowhitaker for the fantastic starter code help.
