Welcome!
This is the community forum for my apps Pythonista and Editorial.
For individual support questions, you can also send an email. If you have a very short question or just want to say hello — I'm @olemoritz on Twitter.
Webpage Slices are Different from what is There
-
@TomD could you post your code here?
-
@TomD If you download with something like
data = requests.get(url).content
data is bytes, and when you print it, it is shown as a string like b'xxx'
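To illustrate (a minimal sketch using a hard-coded bytes literal instead of a real download): printing a bytes object shows Python's b'...' representation, with backslash escapes for the non-printable characters.

```python
# Printing a bytes object shows the b'...' representation;
# escapes like \r and \n each stand for a single byte.
data = b"\r\n<!DOCTYPE html>"
print(data)        # b'\r\n<!DOCTYPE html>'
print(type(data))  # <class 'bytes'>
print(len(data))   # 17: \r and \n each count as one byte
```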
-
import urllib.request

with urllib.request.urlopen("https://www.asx.com.au/asx/statistics/todayAnns.do") as response:
    tda = response.read()

# Print the entire html string so I know what is in it
# The output of this print statement starts:
# b'\r\n\r\n\r\n<!DOCTYPE
print(tda)

# Separately print the first 5 characters in the html string
# The output of this is, including spaces between items:
# b'\r' b'\n' b'\r' b'\n' b'\r'
print(tda[0:1], tda[1:2], tda[2:3], tda[3:4], tda[4:5])

# Print the first 5 characters in the string
# The output of this is:
# b'\r\n\r\n\r'
print(tda[0:5])
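In case it helps explain the output above: slicing bytes always yields another bytes object, so even a one-byte slice prints with the b'' wrapper, while plain indexing gives the raw integer value. A small sketch, with a made-up prefix standing in for the downloaded page:

```python
tda = b"\r\n\r\n\r\n<!DOCTYPE html>"  # stand-in for response.read()

print(tda[0:1])    # b'\r'  -> a one-byte slice is still bytes
print(tda[0:5])    # b'\r\n\r\n\r'
print(tda[0])      # 13    -> indexing gives the byte's integer value
print(len(b"\n"))  # 1     -> an escape like \n is a single byte
```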
-
CVP, so a printed string slice doesn't take the html one character at a time. It combines the characters into groups and adds apostrophes.
-
@TomD You can see the string is between b' '
And characters with \ are not printable: e.g. \n = newline
Thus b'\n' is only one character, a newline.
-
Thanks CVP. That has me onto something.
I am data scraping. Maybe I'd be better off using a package like BeautifulSoup?
-
@TomD try this
st = tda.decode('utf8')
print(st)
And you will see that there are empty lines at the beginning, which are \n
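A quick sketch of what decode does here, with a short bytes literal standing in for the downloaded page:

```python
raw = b"\r\n\r\n\r\n<html>Today's Announcements</html>"
st = raw.decode('utf8')  # bytes -> str
print(repr(st))          # the leading \r\n pairs are the blank lines
print(st.strip())        # strip() removes the leading/trailing whitespace
```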
-
It doesn't like
print (st)
-
@TomD try this script
import urllib.request

with urllib.request.urlopen("https://www.asx.com.au/asx/statistics/todayAnns.do") as response:
    tda = response.read()
st = tda.decode('utf8')
print(st)
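For the actual scraping step, the stdlib html.parser module can pull data out of the decoded string without any third-party package. This is just a sketch: the a/href handling is generic HTML, not something specific to the ASX page, and a tiny hard-coded snippet is fed in so it runs on its own.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag fed to the parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

parser = LinkCollector()
# In the script above you would feed st here instead.
parser.feed('<html><body><a href="/one.pdf">One</a>'
            '<a href="/two.pdf">Two</a></body></html>')
print(parser.links)   # ['/one.pdf', '/two.pdf']
```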
-
I see, so I could work on that utf8 string more easily
-
@TomD st contains a string, thus yes, good luck
-
Much appreciated. You have helped me around an obstacle
-
@TomD, I definitely recommend using BeautifulSoup, or a webview with JavaScript; the latter especially if you are trying to scrape pages with dynamic content.