Positioning has been a driving factor in the development of ubiquitous computing applications over the past two decades. Numerous devices and techniques have been developed, yet few of them are actually used commercially: their precision is limited to specific applications, and their availability is restricted to the provider of a specific service. Occasionally, two methods have been combined to recalibrate each other. Most recently, proposals have been made to combine hybrid positioning data from different technological sources, in order to obtain higher confidence in a position estimate by principles of data fusion. With the penetration of even the cheapest everyday objects by pervasive devices, advances in visual tracking and recognition, the arrival of biometric devices in every office, and wireless sensor networks of various categories, a new quality of interworking position- and context-aware systems becomes available. The massive redundancy of such nodes and the synergetic heterogeneity of their recognition principles make it possible to tailor the perceived positioning confidence to the specific requirements of the target application, and to take a self-learning, self-healing approach to misleading, wrong, and outdated pieces of information.
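The data-fusion principle mentioned above can be illustrated with a minimal sketch, assuming two independent position estimates modeled as one-dimensional Gaussians (the sensor names and numeric values are purely hypothetical): inverse-variance weighting yields a fused estimate whose variance is lower than that of either source.

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian position estimates
    (e.g. a coarse radio fix and a sharper visual fix)
    by inverse-variance weighting."""
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_mean = fused_var * (mean_a / var_a + mean_b / var_b)
    return fused_mean, fused_var

# Hypothetical readings along one axis, in meters:
# radio fix at 10.0 m (variance 4.0), camera fix at 14.0 m (variance 1.0).
mean, var = fuse(10.0, 4.0, 14.0, 1.0)
```

The fused estimate is pulled toward the more certain source, and its variance is smaller than the best individual one, which is the basic motivation for combining redundant, heterogeneous position sensors.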