Coords/edge detect 2 colour image
@rb, are the rectangles exactly "squared" or can they be in any position in the image? Are the rectangles solid colour and the background another colour, or is the background (say) white with (say) black rectangular outlines? Do you know what the colours are or can they be anything?
cvp last edited by
@mikael, it's been a long time since we've seen you here, welcome back
@cvp, thanks! Have been hacking more on the laptop lately.
rb last edited by rb
So I don’t have an image, it’s just in my head, but initially yes, I’m thinking rectangles: black on white or white on black. They could be arbitrary sizes, I’m thinking. I just want to use them to create edges that I can repurpose as coordinates for building other elements.
Vision iOS framework looks perfect actually so yes please would be interested in how to implement such a thing in Pythonista for sure.
rb last edited by rb
I found another post regarding text recognition and iOS Vision. Rectangle recognition is mentioned there, so I cut out the text aspect and adapted it (thanks jonb):
But I can’t get what I need from this - I have tried altering the VNDetectRectanglesRequest properties.
The basic issue is that even though it’s a very simple image, it seems to struggle to identify the edges accurately.
```python
from objc_util import *
import ui

ui_image = ui.Image.named('test:Gray21')
ui_image.show()

load_framework('Vision')
VNDetectRectanglesRequest = ObjCClass('VNDetectRectanglesRequest')
VNImageRequestHandler = ObjCClass('VNImageRequestHandler')

handler = VNImageRequestHandler.alloc().initWithData_options_(
    ui_image.to_png(), None).autorelease()
req = VNDetectRectanglesRequest.alloc().init().autorelease()
req.maximumObservations = 0
req.minimumSize = 0.01
req.minimumAspectRatio = 0.0
req.maximumAspectRatio = 1.0
req.quadratureTolerance = 10

success = handler.performRequests_error_([req], None)

with ui.ImageContext(*tuple(ui_image.size)) as ctx:
    ui_image.draw()
    for result in req.results():
        cgpts = [result.bottomLeft(), result.topLeft(), result.topRight(),
                 result.bottomRight(), result.bottomLeft()]
        # Vision reports normalized coordinates with the origin at the
        # bottom-left, so flip y when converting to image pixels.
        verts = [(p.x * ui_image.size.w, (1 - p.y) * ui_image.size.h)
                 for p in cgpts]
        pth = ui.Path()
        pth.move_to(*verts[0])
        for p in verts[1:]:
            pth.line_to(*p)
        ui.set_color('red')
        pth.stroke()
        # Bounding box: top-left corner plus width/height.
        x, y = verts[1]
        w, h = verts[3][0] - x, verts[3][1] - y
    marked_img = ctx.get_image()
marked_img.show()
```
@rb not so bad with
Hmm, OK, so you're saying two colours works better. I only used that test image as I couldn’t work out how to upload an image here…
This is more like the kind of image I want to use though, i.e. mostly long, thin, horizontal rectangles. It seems to struggle; try:
It creates extra rectangles at the top and bottom bounds. How can I remove these?
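One way to discard those spurious detections (a sketch, not from the thread; `drop_border_rects` is a hypothetical helper): Vision reports corners in normalized 0–1 coordinates, so any observation whose corners hug the frame edge can be filtered out before drawing.

```python
def drop_border_rects(rects, margin=0.02):
    """Drop detections whose corners hug the image border.

    rects: list of corner lists in Vision's normalized coordinates
    (each corner an (x, y) tuple in 0..1).
    """
    def near_edge(p):
        x, y = p
        return (x < margin or x > 1 - margin or
                y < margin or y > 1 - margin)
    # Keep a rectangle only if none of its corners sits within
    # `margin` of the frame edge.
    return [r for r in rects if not any(near_edge(p) for p in r)]

# Example: a full-frame detection is discarded, an interior one kept.
full_frame = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
interior = [(0.25, 0.3), (0.25, 0.5), (0.75, 0.5), (0.75, 0.3)]
kept = drop_border_rects([full_frame, interior])
```

In the detection loop you would build each corner list from `result.bottomLeft()` etc. (as `(p.x, p.y)` tuples) and run it through this filter before converting to pixel coordinates.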
@rb try with this, it works
```python
import ui

with ui.ImageContext(400, 300) as ctx:
    pth = ui.Path.rect(0, 0, 400, 300)
    ui.set_color('black')
    pth.fill()
    pthr = ui.Path.rect(100, 100, 200, 50)
    ui.set_color('white')
    pthr.fill()
    ui_image = ctx.get_image()
# ui_image = ui.Image.named('iow:drag_256')
```
I couldn’t work out how to upload an image here…
```python
import pyimgur, photos, clipboard, os, console

i = photos.pick_image()
if i:
    print(i.format)
    format = 'gif' if (i.format == 'GIF') else 'jpg'
    i.save('img.' + format)
    clipboard.set(pyimgur.Imgur("303d632d723a549")
                  .upload_image('img.' + format, title="Uploaded-Image").link)
    console.hud_alert("link copied!")
    os.remove('img.' + format)
```
With pyImgur from here
JonB last edited by
I had a rectangle recognizer.
You might play with the quadratureTolerance -- you are only allowing rectangles with angles within 10 degrees -- you might increase that to the default of 30 to allow perspective.
Try adjusting minimumConfidence -- lower will allow more detections, at lower quality.
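JonB's two suggestions amount to a small configuration change on the request object from the earlier snippet (a sketch for iOS/Pythonista; `quadratureTolerance`, `minimumConfidence`, and `maximumObservations` are properties of Apple's `VNDetectRectanglesRequest`):

```python
from objc_util import ObjCClass, load_framework

load_framework('Vision')
req = ObjCClass('VNDetectRectanglesRequest').alloc().init().autorelease()

# Degrees of deviation from 90-degree corners; 30 is the default and
# tolerates some perspective skew (the snippet above used 10).
req.quadratureTolerance = 30.0
# Lower confidence admits more, but lower-quality, detections.
req.minimumConfidence = 0.3
# 0 means report every rectangle found, not just the best one.
req.maximumObservations = 0
```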
Cheers everyone. Jon, that’s where I nabbed that snippet from in the first place! The issue is the proximity to the edge of the image, I think; mine were very close to the edge, so I tried a bigger black border, and I did try messing with the numbers a bit on the properties. It seems to work better now, but still not perfect.