Must be a problem with my silly metric - maybe leave that out. I think you can use dice (with iou=True) to get the proper Jaccard index anyway. Sorry about that!
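For anyone wanting to sanity-check the relationship: Dice and Jaccard (IoU) are linked by J = D / (2 - D), so either one can be recovered from the other. A minimal NumPy sketch (the function name and toy masks are just mine, for illustration):

```python
import numpy as np

def dice_and_iou(pred, targ):
    """Dice and Jaccard (IoU) for binary masks; they satisfy J = D / (2 - D)."""
    pred, targ = pred.astype(bool), targ.astype(bool)
    inter = np.logical_and(pred, targ).sum()
    dice = 2 * inter / (pred.sum() + targ.sum())
    iou = inter / np.logical_or(pred, targ).sum()
    return dice, iou

pred = np.array([[1, 1], [0, 0]])
targ = np.array([[1, 0], [1, 0]])
d, j = dice_and_iou(pred, targ)  # d = 2*1/(2+2) = 0.5, j = 1/3
```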
Thanks for your reply. I will try that out
Hello there, @daveluo_gfdrr,
First of all, thanks for your great efforts.
here : https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=1RC5HvDwEiuJ
I noticed that you used zoom_level=19 when creating 256px tiles. What is the relation between the zoom_level and the size of the tile? If I want to create 512px tiles, which zoom_level should be chosen?
What zoom_level was chosen to create the 1024px test images?
The zoom levels at 256x256 tile size correspond to these meters/pixel values at the equator: https://wiki.openstreetmap.org/wiki/Zoom_levels
At zoom 19 with tile_size=256, it’s ~0.3 m/pixel. If you increase tile_size to 512x512 at the same zoom, then m/pixel is halved to ~0.15 m/pixel because the ground area covered by each tile polygon at a given zoom level remains the same.
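If it helps to compute the resolution directly, the OSM wiki’s ground-resolution formula is easy to code up (a sketch; 156543.03392 is the standard Web Mercator meters/pixel at zoom 0 for 256px tiles):

```python
import math

def meters_per_pixel(zoom, lat=0.0, tile_size=256):
    """Approximate ground resolution of an XYZ tile (OSM wiki formula)."""
    return 156543.03392 * math.cos(math.radians(lat)) / (2 ** zoom) * (256 / tile_size)

print(meters_per_pixel(19))                 # ~0.299 m/pixel at the equator
print(meters_per_pixel(19, tile_size=512))  # ~0.149 m/pixel, i.e. halved
```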
In that tutorial, rio-tiler uses rasterio’s WarpedVRT under the hood to resample raster images from their native resolutions and projections to these standardized web map tiles, but that’s not the only way to create training chips, as mentioned in my earlier post.
Hi Dave @daveluo_gfdrr, Hello all,
I opened the Accra 665946 scene using ERDAS and it shows a pixel size of 0.02 meters, while your official webpage description says the resolution for this scene is 3 cm:
Using gdalinfo, I found the following:
Pixel Size = (0.020015187071028,-0.020015187071028).
Can you please confirm which of those three (ERDAS, GDAL, or your webpage) is accurate?
Hi @aghandour, thanks for looking into it. Yes you’re right, that scene should be listed as 2cm resolution. We’ll correct that on the site.
Here is the full list of train_tier_1 image resolutions (I hope it will be useful for someone):
Resolutions of the test images are removed, as far as I can see.
I think there was a question about how to create training image & label chips with windowed reads at native resolution. Here is some new starter code I added to the bottom of the notebook “A quick intro to accessing Open Cities AI Challenge data…” to show how you can use rasterio’s
rasterize() and geopandas to do that:
How large is the labeling error expected to be in the tier1 data?
I marked the boundaries in orange. Is this alignment error normal?
Yes, that’s within the expected range of alignment error for tier1 data. Tier 1 doesn’t necessarily mean the labels are perfect, but that they’re relatively much higher quality and more immediately useful for model training than tier 2. We try to provide as high quality labels as possible to the extent of what participatory mapping teams have done per city/region, but there will always be some label noise. This is to be expected in real-world applications, so solutions should be prepared to work with it.
@daveluo_gfdrr thanks a lot.
Hi @daveluo_gfdrr, Hi all,
I am wondering how to transform the lat/long coordinates of a specific building polygon into pixel coordinates relative to the top left corner of a tiled window.
For the DrivenData semantic segmentation challenge, here is code using the Keras segmentation_models library - GitHub (https://github.com/qubvel/segmentation_models)
Credits to johnowhitaker for the fantastic starter code help