Joshua Bronson
2009-01-14 22:17:58 UTC
from http://jjinux.blogspot.com/2008/09/python-debugging-memory-leaks.html:
# This program leaks memory rather quickly. Removing the charset
# parameter fixes it.
import MySQLdb
import sys

connection = MySQLdb.connect(user='user', passwd='password',
                             host='localhost', db='development',
                             charset='utf8')
try:
    cursor = connection.cursor()
    cursor.execute('select * from mytable where false')
    sys.stdout.write('.')
    sys.stdout.flush()
finally:
    connection.close()
...and in the comments:
mike bayer <http://www.blogger.com/profile/01417862951114999907> said...
I think it's important to note that the memory leak here goes away if the
use_unicode=0 flag is set. This also causes MySQLdb to return plain
bytestrings instead of Python unicode objects, but a SQL abstraction layer
such as SQLAlchemy handles the conversion of bytestring to unicode object in
a more finely-controllable way. So the very common setting of
charset=utf8&use_unicode=0 in conjunction with an abstraction layer which
handles the unicode conversion is the way to go.
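
For concreteness, here's a minimal sketch of the workaround he describes at
the MySQLdb level, reusing the connection parameters from the example above
(the 'name' column is made up for illustration):

import MySQLdb

# use_unicode=0 makes MySQLdb return plain bytestrings, which avoids the
# leak; decoding to unicode then happens above the driver, e.g. in SQLAlchemy.
connection = MySQLdb.connect(user='user', passwd='password',
                             host='localhost', db='development',
                             charset='utf8', use_unicode=0)
try:
    cursor = connection.cursor()
    cursor.execute('select name from mytable')
    for (name,) in cursor.fetchall():
        print name.decode('utf8')  # conversion done by us, not by MySQLdb
finally:
    connection.close()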
I am not sure whether we are affected by this. We are passing
mysql_collate='utf8_bin' to our Tables when we set up our model, and our
tables come out with a charset of utf8. Would tacking a
"?charset=utf8&use_unicode=0" on to the end of "melkjug.db.url =
mysql://melkjug:***@localhost/melkjug" just to be safe hurt anything?
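
In case it helps, here is roughly what I mean, assuming SQLAlchemy is the
layer doing the unicode conversion; the table and columns below are made up,
and the password is elided as in the URL above:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, Unicode

# Query-string arguments on the URL are handed through to MySQLdb.connect,
# so this is the same as passing charset='utf8', use_unicode=0 to the driver.
engine = create_engine(
    'mysql://melkjug:***@localhost/melkjug?charset=utf8&use_unicode=0')

metadata = MetaData()

# mysql_collate ends up in the generated CREATE TABLE, which is why our
# tables already come out with a utf8 charset.
example = Table('example', metadata,
                Column('id', Integer, primary_key=True),
                Column('title', Unicode(255)),
                mysql_collate='utf8_bin')

metadata.create_all(engine)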
Josh
--
Archive: http://www.openplans.org/projects/melkjug/lists/melkjug-development-list/archive/2009/01/1231971481202
To unsubscribe send an email with subject "unsubscribe" to melkjug-dev-***@public.gmane.org Please contact melkjug-dev-manager-ZwoEplunGu1pszqg2B6Wd0B+***@public.gmane.org for questions.