Ok guys, here is the plan!
I. Requirements:
1) A simple HTML parser engine that can retrieve files from the SD card, render basic formatting tags, and follow links
2) A local Apache server
3) Experience with PHP, SQL, and XML
4) A web crawler such as HTTrack
II. Implementation:
1) Download and extract
http://download.wikimedia.org/enwiki/lates...rticles.xml.bz2 (1.8 GB XML dump of the articles, no images. Images would add another 79 GB!)
2) Write a PHP script to process the XML data files (better still, import the XML into MySQL; a rough sketch of such an import script follows this list)
3) The encyclopedia's front page is just a list of the alphabet. Each letter links to a two-letter sub-index. For example, letter P on the front page links to a page listing PA, PB, PC, PD, etc.
4) Each entry on the second page links to a dynamically generated page that lists all articles beginning with the chosen prefix. For example, clicking PC on the second page lists the topics of all articles whose titles start with PC (a sketch of this listing script is also given below).
5) Clicking a topic on the third page opens the actual article.
6) Run HTTrack and let it create an offline copy of the local Wikipedia.
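Here is a rough sketch of the step 2 import, assuming a database called "wiki" with a table articles(title, body) -- those names are mine, nothing official -- and assuming the dump has already been decompressed to pages-articles.xml. XMLReader streams the file, so the whole multi-gigabyte dump never has to fit in memory:

<?php
// Sketch only: stream the pages-articles dump into MySQL.
// Assumed schema: CREATE TABLE articles (title VARCHAR(255) PRIMARY KEY, body MEDIUMTEXT);
$db = new PDO('mysql:host=localhost;dbname=wiki', 'root', '');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('SET NAMES utf8');
$insert = $db->prepare('REPLACE INTO articles (title, body) VALUES (?, ?)');

$xml = new XMLReader();
$xml->open('pages-articles.xml');

// Skip forward to the first <page>, then handle one page at a time.
while ($xml->read() && $xml->name !== 'page') {
}
while ($xml->name === 'page') {
    $page  = new SimpleXMLElement($xml->readOuterXml());
    $title = (string) $page->title;
    $text  = (string) $page->revision->text;

    // Keep only real articles (skip Talk:, User:, Image: and other namespaces).
    if (strpos($title, ':') === false) {
        $insert->execute(array($title, $text));
    }
    $xml->next('page');   // jump straight to the next <page>
}
$xml->close();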
The local copy is what will actually be browsed on the GP2X. It could easily fill a 4 GB SD card (and that's text only!)
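And for steps 3 and 4, the index pages can all come from one small script reading the same assumed articles table. The script name (index.php), the p parameter, and article.php for the final article view are just placeholder names of mine:

<?php
// Sketch only: one script for the letter index, the two-letter index,
// and the per-prefix article list.
$db = new PDO('mysql:host=localhost;dbname=wiki', 'root', '');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$prefix = isset($_GET['p']) ? strtoupper($_GET['p']) : '';

if ($prefix === '') {
    // Front page: A..Z, each letter linking to its two-letter index.
    foreach (range('A', 'Z') as $letter) {
        echo '<a href="index.php?p=' . $letter . '">' . $letter . "</a><br>\n";
    }
} elseif (strlen($prefix) === 1) {
    // Second page: PA, PB, PC, ... for the chosen letter.
    foreach (range('A', 'Z') as $second) {
        echo '<a href="index.php?p=' . $prefix . $second . '">' . $prefix . $second . "</a><br>\n";
    }
} else {
    // Third page: every article whose title starts with the two-letter prefix.
    $stmt = $db->prepare('SELECT title FROM articles WHERE title LIKE ? ORDER BY title');
    $stmt->execute(array($prefix . '%'));
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        echo '<a href="article.php?t=' . urlencode($row['title']) . '">'
           . htmlspecialchars($row['title']) . "</a><br>\n";
    }
}

HTTrack then just crawls index.php recursively and saves every generated page as a static file.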
Instead of storing the pages as plain HTML files, one could put SQLite plus a compression module on the GP2X; that would save roughly 50% of the storage space.
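A minimal sketch of that idea, again with made-up names (wikipedia.db, a pages table), pulling from the MySQL table used earlier and gz-compressing each page; the GP2X viewer would do the reverse, i.e. SELECT the blob and gzuncompress() it before rendering:

<?php
// Sketch only: pack every page, compressed, into a single SQLite file.
$sqlite = new PDO('sqlite:wikipedia.db');
$sqlite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sqlite->exec('CREATE TABLE IF NOT EXISTS pages (title TEXT PRIMARY KEY, html BLOB)');
$store = $sqlite->prepare('INSERT OR REPLACE INTO pages (title, html) VALUES (?, ?)');

$mysql = new PDO('mysql:host=localhost;dbname=wiki', 'root', '');
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$mysql->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false); // stream rows

$rows = $mysql->query('SELECT title, body FROM articles');
while ($row = $rows->fetch(PDO::FETCH_ASSOC)) {
    // Plain text/HTML usually shrinks by half or more with zlib.
    $store->bindValue(1, $row['title']);
    $store->bindValue(2, gzcompress($row['body'], 9), PDO::PARAM_LOB);
    $store->execute();
}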
Parkydr posted on Dec 10 2006 at 01:39 AM said:
The one you're referring to is the Wikipedia 1.0 Project, which was founded by SOS Children. The version that fits on a single CD is Wikipedia 2006, which consists of 2,000 hand-picked articles.
Wikipedia 2007 is 900MB and consists of about 4000 articles.
My plan is to get all Wikipedia articles, which are 1.8 GB "compressed".