I can't be certain, but I would expect that any approach using wm_concat (or writing a custom aggregate to do the same job) would actually prove more resource-intensive than the result set with duplication per row.
So yeah, you should be able to pull the results into a single returned column (so you'd get "John, Gerry, Mellisa", for example), but I'm not sure it would be faster than pulling the duplicates. Just thinking about sizes: if I had a description and a developer name, both 20 chars long (assuming CHAR, not VARCHAR), and I pulled three entries, I'd end up with (20 + 20) x 3 = 120 bytes, give or take. Grouping them instead would give at least 20 + 20 x 3 = 80 bytes. I don't know what wm_concat does in this regard; I would assume it takes the maximum size of the specified field and multiplies it by the known number of results. It would waste more cycles either way, so. . .
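To make the duplicated-vs-grouped shapes concrete, here's a minimal sketch using SQLite's GROUP_CONCAT, which is a close cousin of Oracle's wm_concat/LISTAGG. The table and data are invented purely for illustration, and this says nothing about the relative CPU cost on Oracle, just the shape of the two result sets.

```python
import sqlite3

# In-memory toy table: one field description, three developers assigned to it.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE fields (description TEXT, developer TEXT)")
cur.executemany(
    "INSERT INTO fields VALUES (?, ?)",
    [("invoice total", "John"),
     ("invoice total", "Gerry"),
     ("invoice total", "Mellisa")],
)

# Duplicated form: the description comes back once per developer (3 rows).
dup_rows = cur.execute(
    "SELECT description, developer FROM fields"
).fetchall()

# Grouped form: one row, developers collapsed into a single string column.
grouped = cur.execute(
    "SELECT description, GROUP_CONCAT(developer, ', ') "
    "FROM fields GROUP BY description"
).fetchall()

print(len(dup_rows))  # 3 rows, description repeated in each
print(grouped)        # 1 row; developers joined (order not guaranteed)
```

Note that GROUP_CONCAT's element order is not guaranteed without extra work, which is one more reason the flat, duplicated result set is the simpler option unless the client really needs the pre-joined string.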
Maybe I should pose the question to my Oracle datalord tomorrow and see what he says.