Python __init__.py vs sys.path.append/insert
I know there are a ton of how-tos on importing Python modules that aren't on the path, but I have yet to come across a comparison of Python's __init__.py versus sys.path.insert. Is one method better than the other? Are there obvious drawbacks to either, such as performance? Is one more "pythonic"?
One scenario I can think of: I have a program that users download and put in whatever directory they like, so I don't know its absolute path (unless I find it programmatically). The folder structure is:
working dir/
    __init__.py
    foo.py
    src/
        my_utils.py
        __init__.py
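For a layout like the one above, both mechanisms can make my_utils importable. The sketch below builds the structure in a temporary directory so it is runnable anywhere; the file contents and the greet function are invented for illustration:

```python
import os
import sys
import tempfile

# Recreate the question's layout in a throwaway directory.
root = tempfile.mkdtemp()
src = os.path.join(root, "src")
os.makedirs(src)
open(os.path.join(src, "__init__.py"), "w").close()
with open(os.path.join(src, "my_utils.py"), "w") as f:
    f.write("def greet():\n    return 'hello'\n")

# Option 1: put the working dir on the path and use the package;
# this works only because src/ contains __init__.py.
sys.path.insert(0, root)
from src import my_utils
print(my_utils.greet())

# Option 2: put src/ itself on sys.path; no __init__.py needed,
# and my_utils becomes a top-level module.
sys.path.insert(0, src)
import my_utils as flat_utils
print(flat_utils.greet())
```

Both print "hello"; the difference is the name the module lives under (src.my_utils vs my_utils) and whether src/ must be a package.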
I don't see a difference between using __init__.py and changing sys.path. Is there any scenario you can think of where one would make a difference?
Part 2 of the question: why is anything required at all to import modules from subdirectories? I'm new to Python, so maybe I'm not understanding why fiddling with the path or creating init files isn't just boilerplate. To me it seems like an unnecessary complication. If I have "dir" in the current working directory and write "import dir.my_utils", I don't see why I should have to list everything I want to be able to import in __init__.py.
Apologies if this is a duplicate; I did search before posting.
Edit: here's a useful link: Automatically call common initialization code without creating __init__.py file
__init__.py is used by the Python interpreter to treat directories as packages. Packages play an important role in avoiding namespace collisions. If you read section 6.4, Packages, of the Python tutorial on modules, you'll see that the mechanism helps prevent directories with a common name from hiding other valid modules that occur later in the search path.
Hence, the package mechanism simplifies the task of importing. With an __init__.py you can write from package.subpackage import *. That would be difficult or tedious to achieve by appending to sys.path (in fact, you would have to append an entry for every possible module's directory).
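To make the tedium point concrete, here is a runnable sketch of a hypothetical pkg/sub tree (names and the __all__ contents are invented): one sys.path entry plus __init__.py files cover the whole tree, whereas the path-only approach would need an entry per directory.

```python
import os
import sys
import tempfile

# Hypothetical layout:
#   pkg/__init__.py
#   pkg/sub/__init__.py   (declares __all__)
#   pkg/sub/a.py, pkg/sub/b.py
root = tempfile.mkdtemp()
sub = os.path.join(root, "pkg", "sub")
os.makedirs(sub)
open(os.path.join(root, "pkg", "__init__.py"), "w").close()
with open(os.path.join(sub, "__init__.py"), "w") as f:
    f.write("__all__ = ['a', 'b']\n")   # controls what * imports
for name in ("a", "b"):
    with open(os.path.join(sub, name + ".py"), "w") as f:
        f.write("NAME = %r\n" % name)

sys.path.insert(0, root)   # one entry for the whole package tree
from pkg.sub import *      # imports submodules a and b per __all__
print(a.NAME, b.NAME)

# The sys.path-only alternative would need an entry per directory,
# e.g. sys.path.insert(0, sub), then "import a", "import b" as flat
# modules with no pkg.sub namespace around them.
```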
As for the second part of the question (why directories need to be treated as packages at all): there needs to be some way to tell Python what should be importable and what should not be. Also, you do not need to touch sys.path explicitly if the modules you require already live on the PYTHONPATH environment variable, since its entries are added to sys.path when the interpreter starts.
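The PYTHONPATH point can be demonstrated by launching a child interpreter whose environment is the only thing that tells it where a throwaway module lives (the module name and contents here are invented):

```python
import os
import subprocess
import sys
import tempfile

# Create a module the child interpreter knows nothing about...
root = tempfile.mkdtemp()
with open(os.path.join(root, "my_utils.py"), "w") as f:
    f.write("VALUE = 42\n")

# ...except through PYTHONPATH, whose entries are prepended to
# sys.path at interpreter startup.
env = dict(os.environ, PYTHONPATH=root)
out = subprocess.run(
    [sys.executable, "-c", "import my_utils; print(my_utils.VALUE)"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())
```

No sys.path manipulation appears in the child's code; the environment alone makes the import succeed.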
Hopefully this answer sheds some light on your query.