C has no "far" pointers; it only has pointers.
Some (generally ancient) compilers, notably for DOS, did offer near, far, and huge pointers. This was due to how DOS managed memory.
Basically, in DOS, memory is broken up into "segments", which are a maximum of 64K in size. By default, most programs were limited to one segment for code and one for data - meaning you could have a maximum of 64K of code and a maximum of 64K of data. Not all that great, ya know?
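If it helps to picture where that 64K limit comes from: in real-mode x86 the hardware builds a physical address as segment * 16 + offset, and the offset is a 16-bit value. Here's a quick sketch of just the arithmetic (nothing compiler-specific about it, and the numbers are made up for illustration):
#include <stdio.h>

int main(void)
{
    /* Real-mode x86 addressing: physical = segment * 16 + offset.
       The offset is only 16 bits, so one segment can reach at most 64K. */
    unsigned long segment = 0x0204;
    unsigned long offset = 100;
    unsigned long physical = segment * 16UL + offset;

    printf("%04lX:%04lX -> physical address %05lX\n", segment, offset, physical);
    return 0;
}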
Well, there are a couple of ways around this. One of them is to use "far" pointers when allocating memory (and, in turn, use a special allocation function). By doing this, any given allocated block can be in a different data segment. Like this:
void far *ptr1;
void far *ptr2;
ptr1 = farmalloc(somesize);
ptr2 = farmalloc(somesize);
Here, ptr1 and ptr2 can go into entirely different segments. Each can still only cope with 64K of data, max, but between them you now have up to 128K of data - 64K each. This is a distinct improvement upon a strict 64K total limit.
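If you want to actually see the segments, Borland-style DOS compilers (Turbo C and friends - I'm assuming one of those here, since other DOS compilers name these things differently) declare farmalloc()/farfree() in alloc.h and provide FP_SEG()/FP_OFF() macros in dos.h. A rough sketch:
#include <stdio.h>
#include <alloc.h> /* farmalloc, farfree - Borland-specific header */
#include <dos.h>   /* FP_SEG, FP_OFF - Borland-specific macros */

int main(void)
{
    void far *ptr1 = farmalloc(40000UL); /* each block still tops out at 64K */
    void far *ptr2 = farmalloc(40000UL);

    /* Print the segment:offset of each block - they'll generally land in different segments */
    printf("ptr1 = %04X:%04X\n", FP_SEG(ptr1), FP_OFF(ptr1));
    printf("ptr2 = %04X:%04X\n", FP_SEG(ptr2), FP_OFF(ptr2));

    farfree(ptr1);
    farfree(ptr2);
    return 0;
}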
So why not make all pointers far by default? Efficiency.
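Part of the cost is just size. On a typical 16-bit DOS compiler (I'm assuming something like Turbo C here; exact sizes depend on the compiler and memory model), a near pointer is a bare 16-bit offset, while a far pointer has to carry a segment as well:
#include <stdio.h>

int main(void)
{
    /* Assumed: a 16-bit DOS compiler where near = offset only, far = segment + offset */
    printf("sizeof(void near *) = %u\n", (unsigned)sizeof(void near *)); /* usually 2 */
    printf("sizeof(void far *)  = %u\n", (unsigned)sizeof(void far *));  /* usually 4 */
    return 0;
}
The bigger cost, though, shows up every time you actually touch the memory.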
In the X86 world of DOS, when accessing memory, you use both a segment and an offset to specify the memory you want to access. In assembler, it looks something like this:
mov ax, [DS:1037]
This means "read memory at the segment specified by DS, at offset 1037, and store the results in the ax register."
In general, this is simplified a little, as (by default) all memory access is done in the segment specified by DS - so writing it out is a waste. What you'd normally see in actual code would be more like:
mov ax, [1037]
It means exactly the same thing - in the segment indicated by DS, go to offset 1037, read the value there, store it in the ax register.
And what about far?
If you recall, when we allocated our two far pointers, the whole point was that each could live in its own segment, giving us more effective usable memory - but at a cost. Assume ptr1 uses segment 204 and ptr2 uses segment 319, and that we want to load whatever value is in memory at offset 100 in each segment. Our code - in assembly - might look something like this:
mov ax, ds ; save the current segment value
push ax
mov ax, 204
mov ds, ax ; load segment for ptr1
mov bx, 100 ; load offset
mov cx, [bx] ; get value at offset 100
mov ax, 319
mov ds, ax ; load segment for ptr2
mov bx, 100
mov dx, [bx] ; get value at offset 100
pop ax
mov ds, ax ; restore the segment value
Now cx has the value from ptr1+100, and dx has the value from ptr2+100.
Contrast that to the default "near" pointers:
char *ptr1 = malloc(somesize);
char *ptr2 = malloc(somesize);
val1 = ptr1[100];
val2 = ptr2[100];
In assembly:
mov bx, [ptr1] ; load the pointer - just a 16-bit offset within DS
add bx, 100 ; add the offset
mov cx, [bx] ; get value at ptr1[100]
mov bx, [ptr2]
add bx, 100
mov dx, [bx] ; get value at ptr2[100]
Because "near" pointers all share the same segment, there's no need to diddle about trying to load and save the segment registers; you just get the address of the buffer, add the offset and you're done. Less code, faster operation.
Fortunately, unless you're dealing with DOS, or some other segmented architecture with ridiculously small segments that forces you into this sort of thing, you simply need not worry about it.