Wednesday, October 19, 2011

Installing Finance::Bank::ID::Mandiri to download Mandiri transactions

By user request, here is a guide to using the Perl module Finance::Bank::ID::Mandiri to download the transactions of your Bank Mandiri account. This guide assumes you are running Linux (Debian or Ubuntu) with Perl 5.10 or later. If you are on Windows, or on Linux with a Perl older than 5.10 (e.g. CentOS 5.x), please adapt accordingly (or, if anyone wants to write a tutorial for those setups, feel free to contact me).

A similar module for BCA, Finance::Bank::ID::BCA, is also available and is used in much the same way.


What you need:

  1. A computer with an Internet connection, running Linux (Debian/Ubuntu) and Perl 5.10 or later
  2. The curl program (to download cpanminus)
  3. Root access (the application can also be installed without root, but for simplicity we will use root)
  4. A Bank Mandiri account with active internet banking access (i.e. a username and password).


Steps:

  1. Install the required Perl modules. To make things easy, we will use cpanminus to install them. If you have not installed cpanminus yet, install it first as follows:

    Open a console and type:

    $ curl -L http://cpanmin.us | perl - --sudo App::cpanminus

    After that, install the Mandiri module with cpanminus:

    $ sudo cpanm -n Finance::Bank::ID::Mandiri

  2. Configure the program. Once installation finishes, you will have a download-mandiri command. Configure it by creating a configuration file:

    $ mkdir ~/.app
    $ (create/edit the file download-mandiri.conf)

    The contents of the configuration file are as follows:

    username = (your Mandiri account username)
    password = (your Mandiri account password)

    After that, just run the download-mandiri command from the console. By default the program downloads the last month's transactions in YAML format. It can also output JSON, and the date range can be customized. Add the --debug option if you want to see debugging messages. You can also run this script from cron so it runs automatically on a schedule (e.g. weekly or daily).
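As a sketch of the cron setup, here is a hypothetical crontab entry that fetches transactions every Monday morning and appends the default YAML output to a file (the paths and schedule are illustrative only):

```
# m h dom mon dow  command
0 6 * * 1  download-mandiri >> $HOME/mandiri-transactions.yaml 2>> $HOME/mandiri-errors.log
```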

If you run into problems, feel free to reply to this blog post.

Wednesday, July 27, 2011

App::UniqFiles (a case for building app with Dist::Zilla and Sub::Spec)

When watching videos at Tudou or Youku, both Chinese YouTube-like video sites, you'll often get one or two 15- or 30-second video ads at the beginning. Since I have been downloading lots of videos recently, my Opera browser cache contains a bunch of these video ad files, each usually ranging from around 500k to a little over 1MB. But there are also duplicates.

I thought I'd collect these ads for learning Chinese, but I don't want the duplicates, only one file per distinct ad. The result: App::UniqFiles, which contains a command-line script called uniq-files. Now all I need to do is type mkdir .save; mv `uniq-files *` .save/ and then delete the duplicate videos, which are the files not moved to .save/.
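uniq-files decides uniqueness by file content, so the core idea can be sketched with standard tools: keep one filename per content digest. (This is an illustration only; the real script does more and its exact digest algorithm is not shown here.)

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# three small stand-in "video" files; two have identical content
printf 'ad-one' > a.flv
printf 'ad-one' > b.flv    # duplicate of a.flv
printf 'ad-two' > c.flv
# print one filename per unique digest (first file after sorting wins)
md5sum *.flv | sort | awk '!seen[$1]++ { print $2 }'
```

This prints a.flv and c.flv but not b.flv, mirroring what `uniq-files *` returns: the list of files to keep.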

With the help of Dist::Zilla, Sub::Spec::CmdLine, Pod::Weaver::Plugin::SubSpec, and Log::Any::App, I managed to finish App::UniqFiles, from scribbling down the concept to uploading the first release to CPAN and GitHub, in just under an hour (00:54 to be exact). Not super-speedy for a small script (I could probably write a one-off script version in 15-30 minutes), but for an extra 30 minutes, I get:

  • a proper Perl distribution, with tests and POD and all;
  • all the core functionality contained in subroutines (which is much more reusable than a script);
  • a POD API documentation for the subroutines;
  • a command-line application with --help message, argument parsing, configurable log levels, even bash completion with just 3 lines of code.

I think developing with Dist::Zilla and Sub::Spec is great, mainly because they realize the DRY ("Don't Repeat Yourself") principle and free you from mundane tasks. Having to repeat the same stuff or do mindless, tedious tasks is indeed a significant source of frustration for programmers. It deflects us from the real, important task: writing the code that actually solves our problems.

Dist::Zilla allows you to generate dist's README from the main module's POD instead of you having to create this file manually. It inserts LICENSE, AUTHORS, VERSION sections to your POD instead of you having to insert and update them manually. It frees you from the mundane tasks like creating dist tarballs, checking ChangeLog, incrementing version numbers, uploading to CPAN, etc. Really, I wouldn't want to build dists manually ever again without tools like Dist::Zilla.

Sub::Spec allows you to specify rich metadata for your sub in one place, from which you can generate Getopt::Long options, POD documentation, a command-line --help message, etc., instead of having to maintain each of them manually. Modules like Sub::Spec::CmdLine also free you from many mundane UI chores (which, coincidentally, I hate) like parsing arguments and formatting output data for the screen.

Monday, July 25, 2011

Undocumented Getopt::Long::Configure feature

Getopt::Long has a Configure() function to let you customize its parsing behaviour, e.g. whether or not to be case-sensitive, whether or not unknown options are passed unmodified or generate an error, etc. However, this customization is global: it affects every piece of code using Getopt::Long.

Since I use Getopt::Long in a utility module, my settings might conflict with those of a user who uses Getopt::Long alongside my module, so I need to localize the effect of my Configure() call. I was about to submit an RT wishlist ticket about this, but some quick checking revealed that Configure() already has this feature.

Configure() returns an arrayref containing all the current options. If you pass this arrayref to it, it will set all the options. This way, you can save and restore options.

Thanks to the Getopt::Long author, Johan Vromans, who apparently has maintained this module since 1990!

Thursday, June 16, 2011

Using Org format to document code

My most recent hacktivity includes preparing Org::Export::Pod and Org::Export::Text (both not yet ready) following Org::Export::HTML. I am planning to document source code (currently just for functions) using Org as the master format instead of POD. From Org, I'll be exporting to various target formats, including POD itself, inserted to modules' source code in the build process using a simple Dist::Zilla plugin.

Now why Org? First and foremost, obviously because I use Emacs, and the last few months I've migrated practically all of my notes/todolists/addressbooks to this format. Also, it's visually nicer to look at than POD when it comes to things like headings and lists. Org also supports tables (I understand that there's an extension to POD that supports tables too, but I imagine it will not be as easy to write?). BTW, among other lightweight markup languages, Markdown Extra also supports tables with an equally nice syntax.

A couple of concerns about Org. First, writing literal examples is a bit more cumbersome. Where in POD or Markdown or most wiki formats you only need to indent to get verbatim text, in Org you need to enclose the block in #+BEGIN_SRC ... #+END_SRC or prepend each line with ": ". But I've come to accept it.
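For comparison, the two Org styles for the same verbatim snippet (a minimal sketch):

```
#+BEGIN_SRC perl
print "hello";
#+END_SRC

: print "hello";
```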

Second is parser support in other languages. Since I envision my function specs ultimately being processed by other languages too, it would be nice if the document parser were supported in those languages as well, including JavaScript and PHP. In this regard, Markdown seems to be a win.

But hey, Org is still readable as-is, and currently nothing beats Org-mode for writing notes. So Org FTW!

Wednesday, March 30, 2011

Bench: a simpler benchmark module

There was a post on blogs.perl.org or Planet Perl Iron Man (sorry, I forgot the exact article) that said something along the lines of: "Benchmark is a fine module, but for simplicity I'll use the time command". Which immediately hit home with me, because I too very seldom use Benchmark. I guess the problem is I almost always have to perldoc it before using it, and there are quite a few extra characters to type.

So last weekend I wrote Bench (repo), which is hopefully simple enough to get used more often.

To benchmark your program, just type: perl -MBench yourscript.pl. Sample output:

$ perl -Ilib -MBench -MMoose -e1

Bench exports a single function, bench(), by default. To time a single sub, use: perl -MBench -e'bench sub { ... }'. By default it will call your sub at most 100 times or for 1 second. Here's a sample output:

100 calls (12120/s), 0.0083s (0.0825ms/call)

To benchmark several subs: perl -MBench -e'bench {a=>sub{...}, b=>sub{...}}' or perl -MBench -e'bench [sub{...}, sub{...}]'. Sample output:

a: 100 calls (12120/s), 0.0083s (0.0825ms/call)
b: 100 calls (5357/s), 0.0187s (0.187ms/call)

Bench will automatically use Dumbbench if it's already loaded, e.g.: perl -MDumbbench -MBench -e'...'. Or you can force Bench to use Dumbbench: perl -MBench -e'bench sub { ... }, {dumbbench=>1}'.

That's about it currently.

Thursday, March 17, 2011


If you're like me, over the years you'll have had your todo lists scattered over multiple programs and places. First a simple text file with a homebrewed format, then various Windows programs, then various Linux GUI programs, then back to Notepad and joe/gedit/kate, then various apps on cellphones, then pencil & paper (because the cellphones kept getting lost/stolen), then some cloud apps, then todo.txt, then finally org-mode. And if you're anything like me or many others, you'll find that org-mode is *it*.

I'm now in the (long, boring) process of consolidating everything in Org: todo lists, contact lists, and even long documents and all journals/diaries. I've written a preliminary version of Org::Parser to help automate things via the command line. It only supports the basics at the moment, but it has been able to parse all my *.org files.

The code is available on GitHub.

Monday, February 14, 2011

Backing up data with Perl, rsync, and git

Currently, I keep my personal data in 2 main directories: ~/repos and ~/media. All text files (including source code, websites, notes/writings, configuration, Emacs .org agendas) live under ~/repos in git repos, one per project (for example: ~/repos/settings, ~/repos/writings, ~/repos/perl-Git-Bunch, etc.). All other files, namely the large media files, live in ~/media.

To back up the data in ~/media, I use File::RsyBak, which provides the command-line script rsybak. This script is basically just a wrapper around the rsync command; it creates backup snapshots for the desired history span (by default: 7 daily + 4 weekly + 3 monthly). It runs daily via cron, and the backups are stored on a separate hard disk, /backup.

To back up the data in ~/repos, I use Git::Bunch, which provides the command-line script gitbunch. Basically, gitbunch also backs up using rsync, but without history (since git already stores the change history). In addition, only the .git/ subdirectory of each repo is backed up. This saves disk space, since I still often copy ~/repos to a flash drive with limited capacity. To restore from backup, we simply do a "git checkout" from each repo's backed-up .git/.
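The restore step can be sketched like this, as a self-contained toy example (the directory names are illustrative; a real backup would live under something like /backup):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# create a small repo standing in for one repo under ~/repos
git init -q orig
(cd orig; echo hello > file.txt; git add .; git -c user.email=a@b -c user.name=a commit -qm init)
# "back up" only the .git/ subdirectory, as gitbunch does
mkdir -p backup/orig
cp -a orig/.git backup/orig/
# restore: copy .git/ back, then check out the working tree from it
mkdir restored
cp -a backup/orig/.git restored/
(cd restored; git checkout -q -- .)
cat restored/file.txt   # -> hello
```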

The gitbunch script can also synchronize one ~/repos directory to another. In essence, "gitbunch sync" is just a wrapper around "git pull". This way, I can easily sync my work from the PC to the netbook or vice versa.

A more detailed article was once written for InfoLINUX magazine: Manajemen data pribadi dengan git (managing personal data with git).

What is your backup strategy?

Monday, February 7, 2011

The coming bloated Perl apps?

A few weeks ago, I got annoyed by the fact that one of our command-line applications was getting slower and slower to start up (the delay was getting more and more noticeable), so I thought I'd do some refactoring, e.g. split large files into smaller ones and delay loading modules until necessary.

Sure enough, one of the main causes of the slow start up was preloading too many modules. Over the years I had been blindly sticking 'use' statements into our kitchen sink $Proj::Utils module, which was used by almost all scripts in the project. Loading $Proj::Utils alone pulled in over 60k lines from around 150 files!

After I split things up, it became clearer which modules are particularly heavy. This one stood out:

% time perl -MFile::ChangeNotify -e1
real 0m0.972s

% perl -MDevel::EndStats -MFile::ChangeNotify -e1
# Total number of module files loaded: 129
# Total number of modules lines loaded: 46385

So almost 130 files and a total of 46k+ lines just from loading File::ChangeNotify alone. 130 files just for a filesystem-monitoring routine! Who would've thought that a filesystem monitor needs so many lines of code? Compare with, say, a recent HTTP client:

% perl -MHTTP::Tiny -e1
# Total number of module files loaded: 18
# Total number of modules lines loaded: 6089

I quickly switched to Linux::Inotify2 and things are much better now (but I might have to revisit this since we want to give the new Debian/kFreeBSD a Squeeze).

As I suspected (since the module is also written by Dave Rolsky), File::ChangeNotify utilizes Moose, which is not particularly lightweight either:

% time perl -MMoose -e1
real 0m0.712s

% perl -MDevel::EndStats -MMoose -e1
# Total number of module files loaded: 100
# Total number of modules lines loaded: 35760

Compare with:

% time perl -MMouse -e1
real 0m0.089s

% perl -MDevel::EndStats -MMouse -e1
# Total number of module files loaded: 20
# Total number of modules lines loaded: 6675

Come to think of it, running Dist::Zilla is also quite painfully slow these days. Just running "dzil foo" pulled in around 60k lines and took 1.7s! Of course, dzil is Moose-based.

While it is a good thing that Moose is getting more popular, it's a bit shameful to see that Ruby and Python scripts "get OO for free" while Moose scripts have to endure a 0.7s startup penalty. Mouse, Moo, and Role::Basic come to the rescue, but I wonder what Ruby/Python programmers would think (you have how many object systems?? Why can you people never agree on one thing instead of TIMTOWTDI-ing everything?)

Disclaimer: the number of lines includes all blanks/comments/POD/DATA/etc. from all files loaded in %INC; the actual SLOC is probably significantly less. Timing was done on a puny HP Mininote netbook (Atom N450 1.66GHz) which I've been stuck with for the past few weeks. With all due respect to the authors of the modules mentioned: they all write fantastic, working code.