Bart Lateur wrote:
> > A bigger difference is those accented characters you're talking about. These
> > are *not* part of the standard ASCII set, and have codes between 128 and
> > 255. I know of 4 platforms: Mac, PC DOS (OEM), PC Windows (ANSI), Unix
> > (probably ANSI as well). Each has it's own "standard".

Well, returning the result of the search in HTML is not a problem. I maintain my city list in FileMaker, then I export a text file out of it and convert it to HTML, so the conversion does not have to happen on the fly every time the CGI returns a page. The CGI searches the HTML version of the db, plugs the matching values into a format, and writes it out.

I know how to convert data with Perl, but my problem is specifically the conversion of the user's request to HTML, so that MacPerl is able to match it to one of the entries in my db. The question was more: how can I make sure the submitted request is converted properly to HTML whether it has been posted from Netscape/Mac, Netscape/Windoze, Internet Explorer, or whatever other platform/software combination is possible?

For instance, when I use Netscape 2.0 to post my request, an 'e-acute' is sent as '%E9' or directly as a raw 'e-acute' byte, depending on the Document encoding option chosen (Menu: Options->Document encoding). Microsoft Internet Explorer (Mac version) uses yet another value... I have not tried the Win95 version, but I'd guess it is another value too... If each browser uses a different way to encode accented chars in forms, it is a real nightmare (can't wait for Unicode ;)

Do you think the CGI should detect which 'user-agent' posted the request and then apply a different conversion table for each?

> > where the .... 's are replaced by a list of 128 characters, the translation
> > table. If anyone's interested, I can post my tables for DOS>MAC and
> > ANSI>MAC.

I'd definitely appreciate seeing those tables.

Thanks for your help,

Stéphane.

___S__t__e__p__h__a__n__e______J__o__s__e___
jose.stephane@uqam.ca
http://www.er.uqam.ca/nobel/m206474/
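
P.S. For what it's worth, here is a minimal sketch of the conversion step I have in mind (plain Perl 5, nothing MacPerl-specific). It assumes the browser submits the form URL-encoded with ISO-8859-1 / Windows ANSI byte values; the %entity hash is only a partial example, not a complete 128-character translation table.

    # Decode a form value and turn high-bit Latin-1 bytes into HTML
    # entities, so the lookup key matches the entities stored in the
    # HTML version of the db. Partial table, for illustration only.
    my %entity = (
        "\xE9" => '&eacute;',   # e-acute (ISO-8859-1 / Windows ANSI)
        "\xE8" => '&egrave;',
        "\xE0" => '&agrave;',
        "\xE7" => '&ccedil;',
    );

    sub decode_and_entify {
        my ($value) = @_;
        $value =~ tr/+/ /;                               # form data uses '+' for space
        $value =~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/ge;   # undo the %XX escapes
        $value =~ s/([\x80-\xFF])/$entity{$1} || sprintf('&#%d;', ord($1))/ge;
        return $value;
    }

    print decode_and_entify('Montr%E9al'), "\n";   # prints: Montr&eacute;al

Of course this only works when the bytes really are Latin-1; if a Mac browser sends raw Mac Roman bytes instead (as it may, depending on the Document encoding setting), the same byte-to-entity table will not match, and a separate Mac-to-ISO translation would be needed first, which is exactly the problem above.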