I have ported GCC (v4.5.3) to a new target (a 32-bit RISC processor). So far everything has gone fine. I wrote my own little C library with basic input/output and tested that it works. Until today I never actually tried the optimization passes (maybe that was the mistake that led to this).
Anyway:
While porting and building Newlib I ran into an error that I tracked down to the following code:
unsigned char hexdig[256];

/* For every character in s, record its value (i + inc) in the table h. */
static void htinit(unsigned char *h, unsigned char *s, int inc)
{
    int i, j;
    for (i = 0; (j = s[i]) != 0; i++)
        h[j] = i + inc;
}
void
hexdig_init(void)
{
htinit(hexdig, (unsigned char *) "**********", 0x10);
    htinit(hexdig, (unsigned char *) "abcdef", 0x10 + 10);
    htinit(hexdig, (unsigned char *) "ABCDEF", 0x10 + 10);
}
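For reference, the expected result of hexdig_init() is a lookup table in which each hex digit character maps to its numeric value plus 0x10, while non-hex characters stay 0. A minimal sanity check (my own test, not part of Newlib):

#include <assert.h>

extern unsigned char hexdig[256];
void hexdig_init(void);

int main(void)
{
    hexdig_init();
    assert(hexdig['0'] == 0x10);  /* '0' -> value 0, offset 0x10 */
    assert(hexdig['9'] == 0x19);  /* '9' -> value 9 */
    assert(hexdig['a'] == 0x1a);  /* 'a' -> value 10 */
    assert(hexdig['F'] == 0x1f);  /* 'F' -> value 15 */
    assert(hexdig['g'] == 0x00);  /* non-hex characters stay 0 */
    return 0;
}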
Compiling this code without optimization works like a charm; compiling it with -O2, however, produces the following error:
test1.c: In function 'hexdig_init':
test1.c:11:1: internal compiler error: in gen_lowpart_general, at rtlhooks.c:59
Tried with:
eco32-gcc -O2 -S test1.c
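(In case it helps to narrow this down, I assume the offending pass can be identified from GCC's RTL dumps, since the last dump file written before the crash should name the pass that died, or by toggling individual -O2 sub-options, e.g.:

eco32-gcc -O2 -S -fdump-rtl-all test1.c    # writes one dump file per RTL pass
eco32-gcc -O1 -S test1.c                   # check whether a lower level survives
eco32-gcc -O2 -fno-gcse -S test1.c         # disable individual -O2 passes one by one

I have not yet gone through these systematically.)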
My question, in short: could this be a bug in my target back end, or just some configuration I missed while building GCC?
Any pointers in the right direction are welcome.