Sunday, December 21, 2014
Mathematical examples
This is very simple. There's one counterexample to the diagonal argument: the points of the line drawn from zero to one.
Correspondingly, the density of the rationals in the reals puts their intervals in 1-1 correspondence with those of other sets dense in the reals (as those are likewise dense).
These plain number-theoretic facts are neatly derivative of first principles, and the conscientious mathematician is now interested in how that is.
Wednesday, March 28, 2012
Euclidean Geometry from First Principles
From alt.philosophy's 'Multidimensional Bostik' : https://groups.google.com/group/alt.philosophy/msg/676cd934b9826ed5/
That makes a lot of sense.
Friday, February 24, 2012
Here follows a plan for parts of design of a process model: the idea here is to encapsulate the work to be done into portable units.
Then for example the map-reduce task is to be sandboxed.
Basically these are composable pieces called baseunits. The baseunit type is like a runnable; there is a factory of runnables for its state machine or automaton. Baseunits are composable: add them together or compose them as products of various operations. Then, dynamically, they are compiled to the implementation factories. Then the algorithms get data and run.
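As a minimal sketch, assuming hypothetical names (BaseUnit, BaseUnitFactory) rather than anything defined so far, the composable runnable-with-factory shape might look like:

// Hypothetical sketch: a baseunit is runnable-like and composable.
interface BaseUnit extends Runnable {
    // sequential composition: run this unit, then the next one
    default BaseUnit andThen(BaseUnit next) {
        return () -> { this.run(); next.run(); };
    }
}

// A factory supplies fresh runnables for the unit's state machine or automaton.
interface BaseUnitFactory {
    BaseUnit create();
}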
Facilities are along the lines of I/O and data access, although those should sit underneath. Basically the point is to be able to pass input to the system and have it process the input (source code, data).
Then, how is this project accumulator to increment? Basically every programmatic component is to have a display widget and to fit into the widgetry. They are all to be composable with simple web-based drag-and-drop tools. These go with the debugging instrumentation, also throughout the system, where the systems generally are to be interactive and reprogrammable throughout. Basically they have natural graphical representations; then, according to various plans, those go into graphical data processing, as well as interactivity toward representing both the forward and reverse mapping.
Then, the reports and so on should all be buildable, with reading and writing XML against schemas. Schemas are generally used to map among schemas, and very simple schemas are used throughout as composition/inheritance/transform descriptions and for graph mapping to subgraph isomorphism.
schemas
selectors
objects
data and layout
algorithms
scheduling
Query and selection language, also product generation language
combinatorial enumeration - when there are possible combinations, then in small ranges their bounds can be computed and their setup allocated efficiently; in medium and small-medium ranges, partials on the smalls can be maintained in specialization of steps of the enumerations in traversals, where brute combinatorial enumeration would no longer be so useful
completion in paths and in subgraph mapping to alignment - basically along the lines of schema mapping, and for example delegation throughout, the combinatorial enumeration can regularly and basically in time compute general reductions for coding, also general pattern matching to check for mis- and close-matches, with completions over insertions/deletions, for example in algorithm step-throughs in range matching and the ragged-on-boundary alignment - completion in paths is basically increasing the graph width and looking for matching on all partially evaluated subgraphs, toward for example error-correcting coding
statistical progression in ubiquitous statistics throughout - with time and progression registration throughout, each random variable (r.v.) generally has its value enqueued as a sample, and then, according to sampling, that is worked into how algorithms use the samples of which random variables to compute their own and other sample statistics.
generate the product, then generate the queries for the ranges, then the working selectors and coloring - this is an example of enumerating the product space before computing any part of the data, along the lines of the plan for the accesses through the data according to their cost as resources, then using that to advise the selectors and their reference machinery over the objects on how to organize the layout and the algorithm of and on the data.
building coloring and permutations into the variable type, working that throughout to the graphical - basically this has each datum, accorded maintenance as a stored type, begin to maintain its own for use in multiple containers with shared contents
Point here is to compose the object, the base object basically has a primitive graphical representation that is driven by its composition, just as there are generally reflective mechanisms over its composition.
Then, graphically: fitting and matching, extracting rules, expecting rules; the graphical then generally has natural layout according to design rules, for example with symmetry and fill. Then, in ubiquitous representations like the objects panel that it draws, objects can be drawn on its readouts.
statistics: Bayesian and non-Bayesian, centralized and non-centralized distributions; centralized and the normal, non-centralized and digital counting frameworks
basically working up then, a class of distributions, that auto-fit to various data samples
and to get the data samples generally in short forms, into the well-distributed statistics
for that, there is generally the timestamped and function-associated ordering data (preserving sort keys, or, exporting sort keys in unsorted data)
the variables' values themselves, in the static node processor model, are statistics; these functions of the measurements are all to be preserved so as to have reliable bias estimators where possible
work up the table to the forms of those with easily computed or tractable values, in generally numerics
numerics:
(structures and relations)
virtual machine / primitive:
natural integers
floating point and imaginary numbers
terse/normal:
fixed point (and imaginary numbers)
variable/extensible precision (and extended precision)
rational approximation and precision maintenance
counting frameworks:
digital and bank array
working shuffle
Now, it is very reasonable to consider performance aspects of the nodes. The nodes generally are virtual timeslices of general-purpose (multi-)processors. It is totally reasonable to use all the registers of a node in its timeslice; in fact, in maintaining that, the virtual environment might hard-dedicate the vector resources of the processor to the node. Using the vari-parallel of the vector arrays of the nodes (for example in the multimedia extensions, or in node architecture), there are basically defined algorithms to move the data in forms among:
character oriented
code page mapping, I/O mapping
word oriented
half word and doubling
scale and fill
natural and all integers
(register oriented, float)
vector oriented
vector step bank processing
Then, the algorithms are designed to make use of the data on the vectors, and the algorithms move the data onto and off of the vector registers for refinement.
For the product of combinations of these types of data and how their natural/intrinsic and synthetic operations run across them (amortized over execution scheduling), for each of the product, there is to be enumerated the cases of the transitions of the values in the vari-parallel among the various alignments of representations of the data that are co-processed.
For example in a batch of 100 records, they might be co-processed. Until there's an error record the ordering is immaterial for that operation's guarantee of completion. Similarly for each of the integer scalar elements of the vector array, in one processor step they are processed together. Then, the result of their computation is to be checked.
Here, there is a general consideration of rate-limiting among scalar and vector processing components. Also a general notion to arrange micro scheduling coordination on rating / rate-setting.
What is to be the composable element, is that in terms of the data, for the 100 records, they are to have their selectors read out and their critical variables packed onto the vector register to be step evaluating in banking. Then, the bank of records completion and the locks on those 100 records are marked, although maybe the lock frees could be spewed to the range aggregation for the range completion, contingent on the success routine. (Have to rewind error record extraction = stack enough to go back that far.)
These will naturally combine in various shuffling networks into very efficient processing over time. For example, client-server and caching can be specialized to run out through the process template a compilation of a run (e.g. a test snapshot). So the composable units of the program definition are very heavyweight indeed.
Basically then the idea is to work out the guarantee over time of process completion, then to look out for worst-case data sets and so on, in maintaining statistics on the records variously, and on error records generally. Yet, then that might get in the way of general rate and flow.
What to work into general evaluation? It only makes sense to work the data up this way when the algorithms have enough schedule to run them for the data. Still, generally the case for parallel evaluation intra-node is strong. It is cache-local data, and, generally the resources are not even already used and should be to take advantage of cache and so on intra-node.
The cost of register transfers is high, then a consideration is as to distance and measure of rate there. One notion is to exercise the general transfer cases.
1. move scalar elements out to vector element and aggregate (reducing, constant over width)
2. move scalar elements out to vector element and return value or range (reducing, asymmetric/asymptotic)
3. move scalar elements out to vector element, process elements vector-wise / in-place, return (constant in-vector step speedup)
4. move scalar elements out to vector element, align with I/O, emit serially
5. read elements from I/O (specialized), from memory/cache, ..., register transfer (specialized)
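As a minimal, hedged sketch of cases 1 and 3, with a plain Java array standing in for the vector register (no real SIMD here, just the shape of the data movement):

// Case 1: move scalars out to the "vector" and aggregate (a reduction, constant over the width).
static long gatherAndSum(int[] scalars) {
    long sum = 0;
    for (int v : scalars) sum += v;      // stands in for a vector-wide reduce
    return sum;
}

// Case 3: process the elements vector-wise / in place, then return them for further refinement.
static void scaleInPlace(int[] lane, int factor) {
    for (int i = 0; i < lane.length; i++) lane[i] *= factor;
}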
As well, algorithm components are freely designed in custom logic of a sort.
What are the algorithms that would make use of the various organizations of the data and so on? Basically general purpose routines should be worked up to each of the forms, and as well they should interchange, so where the logic isn't synthesized they should build from those composers (comp-els).
For example, in computing a product while the values are in order in the array: semi-word product widths, use of small integer frameworks for bucket bounds computations in vector parallel?
For example: mapping a change in the ordering selector (i/d/t, insert/delete/transpose, shift) into the vector elements: shuffle, mark into and out of edge case.
The general serial algorithm and operation on the vari-parallel.
The algorithms are defined as scanners. They go over a pass of the data. That is serial. There is also a selector based framework. (Selectors are pointer/reference tree descriptions that compile to moves). (The algorithms should also be runnable on default library types.)
Then the scanner has built ahead the state machine of the language interpreter. If it's not a scanner then maybe it's a step rule. Scanner is serial, step rule is wide. Scanner can start in the character or word (semi-word, scratch), working up expected bounds for worth to the vari-parallel (in cache realignment and vector bank address).
So, an I/O stream might at first be read character by character, where the algorithm is on a string type? Depends on the algorithm. Algorithms should be defined in formal algorithm types. Forward on a string, or a string as a sub-component: the composition methods of the string are maintained/preserved, generating functional signature maintenance/preservation. So, how to define the algorithm to automatically work data up into the vari-parallel as a general case? Basically it is about alignment and word boundaries and array boundaries. Now, when a cache is filled with this selector, instead of the object calling its extractor/getter, the cache rate is really rapid - why even cache this data? Should work up on the data what it takes about the selector composition and the range, to naturally aggregate up to caches, or else to otherwise put the cache on flush-only.
So, basically where the consideration is alignment: the general v-p promoter/derogator from scalar to parallel placement of the data, and organization of the algorithm; then the edge case of the scalar placement is the beginning and end. For natural (virtual) multi-dimensional arrays, the edge cases can work up to corner cases. For the sequence, the serial algorithm, the edge cases are start and end. Where the values naturally align or fit, the edge cases may be trivial, but it might be worth the establishment of the processing phase that there is the transition for the algorithm for realignments under the vector register (on serial algorithms on vector register contents with progressive algorithms). As well, in the stitching and merging of the algorithms over the data, sometimes they would combine with natural alignment, other times it might require shifting or gearing/transmission in ending an algorithm and beginning the next on the same or a different comp-el's selector point range.
Then, serial I/O is over the available dedicated resources and also as abstract outputs to the various event lines where update can send events. There is a separation of I/O errors and logic errors (necessary specialization of errors = requires an error specialization framework).
vector operations on register model:
vector-wise: scalar arithmetic
across: spigot algorithms, min/max, area-averaging?
How to get derived products off register?
single channel data on vector register
multiple channel data on vector register, paste products on selector reverse (high speed, asynch, offload with object recomposition scheduler)
Basically the vectors are small 4/16, working out where the operation boundaries are, that are along the lines of where the scalars can be spigoted / blitted onto the vectors.
Here there is a general consideration of the cases of overflow and underflow in integer routines and general bit-wise algorithms that define progressions of vector banks of integer (and generally numeric) bits. It is very useful to maintain counts of flow in the semi-word, under the word. This can be used to rapidly estimate products that are the bounds of the object, in only shifting the bit by its offset. Basically, before and/or after an atomic arithmetic operation on the integer, its range is computed. This might be the new range, or it could be the products that would extend it to the next step in the range.
How about then the spigot algorithms, and screw-driven algorithms?
spigot algorithm: put in more data, refined comes out. Have as many pipeline steps as there are register progressions. Then it is step-wise forward across the vector.
screw-driven algorithm: rotates and refines
How about this: how about a mask over the vector contents, so the register or value selector goes over the vector content in the semi-word and evaluates each serially for a forward scan?
For the serial and bit mask, this should be lower: where the high bit is and its offset generally, using that for counting, in that it maintains bounds of having a separate bit for each count i.e. maintaining its alphabet in range.
Now, how to make it so that the forward processing of the data gets back correctly? For example, the I/O framework gets a success result and starts streaming into the vector onloading. The algorithms on that data (and their contingencies) are set up to be executed on the corpus of the selector (or to compose or integrate the selector); this is generally a speculative framework, so it is to be expected that all data will get to a near-compiled representation. Yet, that depends on the data and volume. If the compositor- / compositional- / computational-element "comp-el" is to read into the vector and run in that manner, it should run off expectations and availability of resources, which then demands the presentation of the register and processing resources as variables in the system.
Then, the refinements: sometimes they're sample driven, other times continuous (aggregate, off min/max), delivering samples.
Then, for the space algorithms and the vector, that should work out naturally too with the selectors.
The selectors basically define that they accumulate before they are executed and co-plan (work in constants). Selectors basically define addressing reference through algorithm-replete presentation of trees and graphs of data into bounded sequences (and also about the geometric).
Register contents and time-sharing environments: checksums of values in store
Defining the processing resources in the system as variables
There's a general consideration to work co-scheduling to measure availability and response, although that is not optimal systolically (because it presumes long durations between executions unless they are to be chained in execution about whether they chain and continue execution with time allotment). So the scheduler should be for a recurring event, and with a before/after time and a range (or "at"). Still there is to be figured out: what are the resources, and measuring them.
Timestamp framework: high-resolution counters
read character / range: work off multipliers of algorithm space descriptions, what it costs and how much progress occurs in the forward step
count iterator progression
characterize wait profile of routine
How is something like massive c-s array using parallel? For example in batch completion networks, with marking elements of an array done, and mapping back from the selector to the objects and notifying them.
Another notion is that maybe the general purpose logic is simple, and then to build up the statistical framework in the vector.
variables of operation:
time to completion (composed of times to completion)
invocations, invocations / time, each
how many resources it touches, pass counts
the size of inputs/outputs
Then, how to set up the standard libraries, and how to instrument loops, for these?
Basically could use a convention in Java. Java leans toward not having a TypeGetter; but instead of putting it on the object API, one could define it on some TypeGetter class, specialized to the type, that can be specialized to String objects etcetera.
Java has no intrinsics anyway - still would want a library interface, but then how to instrument Java loops without ugly labels everywhere on each of the loops? Basically the consideration is that in C++ one could have scope automatics that automatically reset each scope. That's one: how to have, for any object, detection each time it comes into scope, without defining another constructor call there, to make it implicit?
Maybe then something along the lines of javassist or workflow callbacks. Still, for loops, make a loop an object and compose them. (How to work up terse definitions close to or in the source language?)
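One hedged way to get the effect of C++ scope automatics in Java is try-with-resources: a hypothetical ScopeTimer (not an existing library class) that records entry on construction and exit on close, so any scope, including a loop body, can be instrumented without labels:

// Hypothetical ScopeTimer: instruments whatever scope it is opened in.
final class ScopeTimer implements AutoCloseable {
    private final String name;
    private final long start = System.nanoTime();
    ScopeTimer(String name) { this.name = name; }
    @Override public void close() {
        System.out.println(name + " took " + (System.nanoTime() - start) + " ns");  // or feed a statistics sink
    }
}

// usage: the timer "resets" each time the scope is entered
// try (ScopeTimer t = new ScopeTimer("loop-pass")) {
//     for (int i = 0; i < n; i++) { /* loop body */ }
// }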
So, what then are to be the definitions?
Product - of what, for what operations? Sees readout of type combinations
Datum - in transfer or reference, y
Then, these selector arrays for example are to be natural iterators. Then, in Java, how to get to
interface Action<T, C> { void apply(T each, C all); }   // <- the loop body goes here

static <T> void for_all(Collection<T> of, Action<T, Collection<T>> todo) { // <- the element type T flows in through Collection<T>
    for (T each : of) {
        todo.apply(each, of);
    }
}
Then, figure out a relevant for_each, for_all, for_any
for_each
convenience order
runs with selector
for_all
aggregate
for_every
aggregate, return completion
for_any
asynchronous update
(non-blocking on each, no error path or alternate error path)
paths can be queued/discarded
loop bodies need these things just like sequences do also: while and do
do, while, ...until, ...; until might best be out of the language; want to avoid "for" also
Then, they all do the same thing in the general case, and in the specialized case they would collide at compile time if there were upstream errors, though that could change the conventions
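A minimal sketch of the asynchronous for_any, assuming an ExecutorService stands in for the scheduler and java.util.function.Consumer as the callback type, with failures taking an alternate error path (a queue that can be drained or discarded) rather than blocking each step:

import java.util.Collection;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

final class Loops {
    // for_any: non-blocking on each element; errors are queued (and can be discarded) rather than thrown
    static <T> Queue<Throwable> for_any(Collection<T> of, Consumer<T> todo, ExecutorService pool) {
        Queue<Throwable> errors = new ConcurrentLinkedQueue<>();
        for (T each : of) {
            pool.submit(() -> {
                try { todo.accept(each); }
                catch (Throwable t) { errors.add(t); }   // alternate error path
            });
        }
        return errors;
    }
}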
Then, basically for the results, it is about the error path or the result contingencies. Also batching, with the notion that the systems will be for example handling the errors of each of a batch, toward their eventual result in an aggregate.
Where the evolutionary aspect comes in is that various resource modulations run over the various sample data; then there can be small-timescale estimation of the evolutionary selections, with being able to run the first many generations of various resource combinations on small data sets (of large/huge population) to rapidly search for matchings, error bounds, and asymptotes on range matching; these filter out as advice for the runtime (as a runnable configuration).
Then, how to break those into ranges? Basically the consideration is along the lines of completion and matching out to duplication and so on, in a huge redundant processing system amortized over scheduled execution, to output the performance graphically and in lines out to work and resources.
OK, that is for the computational framework and the composition of the computation, with the beginning of exposure to transparency in for example the evolutionary framework or weighted graph framework; then there is the notion of the goal finding, the resources, the ongoing experiments, and so on.
"Theoretically, the car produces enough down force to drive upside down." -- http://www.fastcoolcars.com/saleen.htm
Then, the idea is to go about defining the data in schema. Then, to be characterizing that data, it is generally a time series experiment.
How about this I/O over selectors then, in Java; for example these are reference chains in compositions. So, the idea is to have the object return an object that is actually referenced so many deep by a path spec, instead of however many deep in indirection. All the classes with compositional inheritance implement this in the convention, and in that manner the selector operates off of the roots. Then, for example, as this is another method maintained in convention like equals and so on, it just has that, from the reference selector path, it is the first item in declaration order matching the type. Then, it should also return the path schema, so that when another item comes along and references the schema, it compiles to that one, the direct path. So there is a general path discovery mechanism and a direct path patch mechanism to the object reference, which can be a scalar or array, or for the use of converting the return to that type. Here that is primarily for this: that the path() method, given the spec (or when the spec is in scope), returns the object for the selector, whatever the composition of the data-tree root that is the compositional element, toward filling arrays with those references and then operating on those. Basically it could be fixed or variable so it's dynamic, because for example the completion results on the returns go back to the object locks. Also then each object would have a convenience lock, or just a lock to set that any other object can clear, just to see if someone cleared the lock on re-entrant functions (semaphore)? Locksets, etcetera, selectors over locks. Also, for locks, have locks over the type selectors, so that unmodifying trees are free on the reference there too, though then they would combine error conditions.
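A minimal sketch of that path() convention, with hypothetical names (Pathed, Customer, Address); the object resolves the selector path against its own composition, so callers get "so many deep by a path spec" instead of chained indirection:

// Hypothetical convention: an object resolves a selector path against its own composition.
interface Pathed {
    Object path(String... steps);
}

final class Address implements Pathed {
    final String street = "Main St";
    @Override public Object path(String... steps) {
        return (steps.length == 1 && steps[0].equals("street")) ? street : null;
    }
}

final class Customer implements Pathed {
    final Address address = new Address();
    @Override public Object path(String... steps) {
        if (steps.length == 0 || !steps[0].equals("address")) return null;
        if (steps.length == 1) return address;                        // the composed object itself
        return address.path(java.util.Arrays.copyOfRange(steps, 1, steps.length));  // recurse down the chain
    }
}

// e.g. new Customer().path("address", "street") returns "Main St"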
object requirement:
locks, roots, path components through compositional hierarchy
interface and state to monitor
Then, in resolving the compositional and inheritential components of the objects then for example the object could go from field list to associative map as the scale affords economies of scale (demands definition of "economy of scale"). Then, where the object is initially defined that way, then the object interface for this is still to have the accessors, but for example it can't have fields, then it's always an access, unless it has various generic fields when they're available in what is like a variant/union, but instead has room for each of the types, or of some types for example built-ins, or field lists, a "generic" type or prototype, or a custom type: here that is just to preserve field access where for example the object layout is fixed itself in the definition/interchange as it may be for external components (with otherwise maintaining its state rel. the framework and external components). The general notion of organization for presentation is about the synchronization and requirements of update, as to whether for example an interactive system can be cascading (asynchronous) or interlocking.
So, then the object factory makes, for each object, this object accessor. And all the accesses go through it; for example the selector is shared, and that automatically makes sense in that the algorithms co-schedule (and co-refine) on the shared range. It is basically a reference counting and reference coloring mechanism. The numbers of counters and colors help define bounds.
How to concisely type to represent the path in the object? It is the schema accessor to the object, but a different object (or implementation) might map it differently, and it's dynamic (toward interoperable and interchangeable object representations). The same selector with that same object in schema identity for another schema could be various levels deep and in space and time, then the objects could work to refine their selector depth and the indirections would load locally relative each other. So, for the object, for the schema, it is the path spec: in field accessor, getter in property convention, and combinations of accesses and references through as well type correctors or type correlators on type sub-schema with algorithm specialization to types. Also would be maintaining constructor state in constructor framework that throws.
For that then generally there is the notion to have the various schema, and to have builders of those.
For example, a record type might be built naturally from a table and its relations, to define a space. This then goes up to the items framework, is that really feasible in Java? At least the arrays will have the products on them to get a readout of the product space easily, automatically variously in normal data forms.
Then, that can drive generators of the generation of the selectors over them.
Then, multiply collections of the objects together, generating their product spaces. Basically it is a plan.
Then the selector spec can be reused because it is off types in the product space, not the rows and columns.
Make this a table type? It is a TableProduct type, of a sort (often a table, tabular segmentation, block matrix, etc).
Then for the selector, have for example an address selector. (Finds link in table to address ID). So the TableProduct is built directly from the data sources and their relational schema (or annotations to schema).
Then adding them together could be appending them, or what, basically is to be worked out the product table of all the relations, working up to cyclical relations.
work in object constant space (identities, parameters)
concatenate/append:
product:
Cartesian (square, n-d, space-filling)
related: (de-/re-)normalizing to object mapping
Then, for the selectors, they start at the root of this tree, and then the result set is a computation of the bounds of the selectors and setting up the scales for the iterator passes over the data (given read/write, insert/delete/modify characteristics of the algorithm).
Because the products and selectors don't actually include the data, it's easy to keep a lot of them in memory. The data is loaded on demand.
Then, for the selector, the various algorithms schedule together on a selector, for when the pass runs over the data. Scheduling is to start and run to completion or to schedule in a time-driven system.
Now, the idea is to be able to build these selectors easily off of the product space. So, need to graphically display the product space and build off of it.
OK, going about that, the n-d product selector.
Each of the tables will have its "primitive graphical representation"; basically the schema rules are also defined off of the product relations, in terms of where they fit in as products. Then, there is some primitive graphical display plane. The display plane is basically to organize.
HTML display plane: table, (workup)
canvas display plane: zoomable dots with legend
swivel display plane: working transitions/transforms for n-d display to plane
working component layout, working back events for interactive display in re-usable and generic representations
then, basically there should be a live display plane, or it should make one.
So, there is some default display plane, all the events go through it. (log console)
Then it is in events. Basically each of the setup operations on the data, products, and selectors should emit events, compared to iterator and scalar operators that variously do (with selectors on events).
So, defining a table might be along the lines of, discovering the table from the data source, or what. Basically a data source will have automatic granularity down to its data accesses. Then for working with a data source, from all tables down to selecting each datum, those are all the TableProducts it exports. Here for example it is assumed that the referential constraints are consistent also, there is a general recursive mechanism that enumerates classes of the TableProducts of the DataSource.
"If you like pivots, you might like TableProducts. If you don't like pivots, you still might like TableProducts."
Then working down from the data source, it should create the local representation and model of the data source as it is enumerated. Now an array or a scalar is automatically a DataSource also, in local references to local TableProducts.
Then, there is the recursive enumeration of the relations of the tables, and variously for the objects, through their composition navigation lines as above. So all the objects are navigable for their objects and relational hierarchy in making from that a tree of those, and balanced trees through each with similar navigational properties. These all then run together, and, where the depth of the levels is not too great, they are dynamic and so on, and generally then it is a flexible selector for cursor model.
Then, for example to have selectors correlate variously related terms in various data sources and their composites, with rotating the components generally about, they are defined to illustrate the space term, that should work well or be generally tractable. (Working into menu bar plugin framework on selector cursor and scroll wheel.)
OK then, working up the data and selector framework, and the interactive interface, and the programmatic environment on algorithms.
Now, consider what can be accomplished with gathering statistics of the data.
About scaling and partitions, one idea for goal seeking is to work those with near-bounds cases out, working up to where the counters can be in partitions, about maintaining the dropout history.
Here basically here is this: writing statistics when expectations are met, or, when they are not met.
So, how can I bootstrap this? Basically can work arrays and declarative relations, like keys. These would be nicely set up with operator overloading; consider how to use enums to run those out to integer arithmetic for the combinations, then run them back, for actually getting the effect of operator overloading in Java.
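A minimal sketch of that enum-to-integer round trip, with hypothetical names (Key, KeyCodes), since Java has no operator overloading: members run out to small integer codes, combine with plain arithmetic, and map back:

// Hypothetical: enum members run out to small integers, combine arithmetically, and run back.
enum Key { ID, NAME, ADDRESS }

final class KeyCodes {
    private static final int BASE = Key.values().length;

    // combine two keys into one integer code (the "product" of the pair)
    static int combine(Key a, Key b) { return a.ordinal() * BASE + b.ordinal(); }

    // run the integer back to the pair of enum members
    static Key[] split(int code) {
        return new Key[] { Key.values()[code / BASE], Key.values()[code % BASE] };
    }
}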
Otherwise, to be generating arrays of the selector indices, where they are not so deep for positional.
For array return type, return an object and that goes through some castable interface, ...basically make replete and interchangeable ...,
So, for built in and primitive objects, make some default reflection strategy over them. Look for get/set in convention, name matching and so on in annotation generation. Otherwise, then, how to work those into the algorithms? Basically the algorithms are to the data types of the selected items. These might have general behaviors like acting on the type of object if it exists in variously readable or writeable form. For example how to recursively add them together, that would be good. Work on generally implementing stack-free recursion throughout.
OK then: make a factory that, given an object, returns its composition and inheritance schema over type discovery. (Similarly the DataSource is discoverable.) Then, how to implement the algorithms over them? Basically they describe the types. So, then there is iteration over these trees, with for example:
1) installing literal references for telling objects apart where they might only be equal for identity
this goes into a map for the object reference, copy of map is in object to return that path compilation into the object for the schema
Then, there is to be still the forward and recursive:
1) use reflection and type discovery to enumerate composition and type in hierarchy
2) implement algorithms to work on those types, also to bring them above in the schema according to then their rule set or methods as objects of those types.
for example, might be String type in object, but, really is Component of Address or along those lines, for validation rulesets, methods (aspects)
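A minimal sketch of step 1 above, reflection and type discovery over the composition, with a hypothetical SchemaFactory (no cycle guard here, so it assumes acyclic composition):

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Hypothetical factory: reflect over declared fields and recurse into non-JDK types
// to enumerate the composition schema as dotted paths.
final class SchemaFactory {
    static List<String> paths(Class<?> type) {
        List<String> out = new ArrayList<>();
        walk(type, "", out);
        return out;
    }
    private static void walk(Class<?> type, String prefix, List<String> out) {
        for (Field f : type.getDeclaredFields()) {
            String path = prefix.isEmpty() ? f.getName() : prefix + "." + f.getName();
            Class<?> ft = f.getType();
            if (ft.isPrimitive() || ft.getName().startsWith("java.")) {
                out.add(path);            // leaf: built-in or library type
            } else {
                walk(ft, path, out);      // recurse into the composed user type
            }
        }
    }
}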
Then, to generate the iterators, they could be combinatorially enumerated. For this, for the general sense of combinatorial enumerations, here it is of the path components and the combinations that tend to be patterns in the path components that return from among the objects. For example, if all of the objects are the same, as they often are in large homogeneous data sets, then it might make sense to compile the path instead of mapping the path. Then, if the pass actually makes a copy of the data (process boundary), it's reasonable to combine that with sorting on the result set, for example opportunistic ordering validation in range block transfer (opportunistic or investing, also greedy, in working out pre-computed bounds to accelerate a later step in combining results, e.g. partial products for later merges).
OK then:
objects
discovery (recursive)
path enumeration
path reduction enumeration
Then, at each of these stages, it is simply, over the objects, adding them to these containers or declaring them in these lifecycle environments. What about standard containers? It is like built-in primitives for numeric types: how to use the standard containers interchangeably throughout, and to present them to external components (besides just the object types in the model)? So for this it is good to have some way to cast directly to one of those types, to use it then with those types. Here the selector is used on a copy pass to construct the standard library containers for Java, of the objects that already exist under the selector instead of only being referenced in the container. Obviously for built-in types it would be a copy, or the function might pass it as input also, with wrapping the function to copy into a new one, or to hashcode each one and check them for change, for update. (The auxiliary buffer generally contains a hashcode.)
OK, then there is the consideration that generally the user data types are always defined as interfaces. Then reasonable objects are constructed to implement them. And methods are implemented in terms of the algorithms and attached to them, often typing the string and other built-in types. Or, a default implementation of an existing user-defined base type would have the implementation directed under the user-defined interfaces, in an annotation or directly as an inner class, with then generally getting the base type from the user-defined interface, installing it as a field in the interface. Working that out to strings for operational support (ofString() ...), this will help, ....
So, here in generics, in Java, and generally, this idea of building up the object hierarchy via interfaces, then being able to declare the default or base types, and then as well to log all their inputs right, helps to maintain user defined types interchangeably with general types, where a lot of the utility then is being able to use standard algorithms on the implementation types and their delegates, this is building up the object type above the user defined types, how without redefining the user defined types? The schema mapping allows the calls to be aligned with other APIs with similar and not same domains, in this manner, the objects are well-maintainable.
OK, feeling good about this system plan. Basically there is the consideration that the user-defined objects are declared in interfaces, which also tag methods besides fields, and getters and setters are for property types; also other categories can run off defaults (POD type, e.g.). The user-defined objects are defined in classes, either default classes or others. Then, where the system wants to instrument aspects, it can wrap the class in an interface preserving its error handling.
Then, the interfaces are reflected and their contents enumerated, to make a tree of the references of the object. Then, these are reference counted for use in collections, and their references are variously maintained in the object factories. Then, for two corpora of data, when they are to union, for example, and the reference paths to the relevant field differ from some types of records to the others, the algorithm is agnostic of it.
Then, how to use the user-defined types? Consider for example a service model's types. Now, for each of them, they've had the interface defined for them. Then it would naturally match in schema the service model type (but the actual service model class type, or superinterface, is used with that client-server API, for example). Yet, as for example general services that it calls, various of the parameters would be placed or in use, and so on for the result set. Then how is the result set bound together in the algorithm from it? Basically the client has a result type that is among the composed and inherited types, the not-derived but transformed types. So, in more reflective exhaustion of the schema, all the introduced types come into the tree. To wrap this facility then pretty much demands annotation or exhaustion of the introduced types and whether they're reasonable, maybe just off the existence of the interface.
OK, then for example how to get, strings of an address from an ID in the record, to the library type,
Would have some address type, somehow, it is bound over multiple columns, say, or how the string is encoded - basically am to work out a typing mechanism for all types of strings, basically a tag typing, running then phylogeny of strings. Also for path components, string representations can be optimized.
Also considerations of custom class loader.
So, for strings, and, tagging and typing them, and, still presenting them generally to external interfaces as strings.
Like Address, it is built of Address Components that fit into a variety of addressing schemes, down to where they are numbers and strings and codes. So, any one of those can be represented as a string, which demands string representations of all object relations. (From serialization to verbose/rich enumeration.)
Then, this reflects a general notion to make a functionary (functional) level above the object models, toward having re-baseable interface representations. (E.g. tying reference counting also to the interface and working up delegates throughout.)
how about this kind of task: extract the rhythm from the music to score it to sounds
That would work off of converting the digital samples to sounds, and from those to have component extraction - periodic and amplitudinal component extraction over time.
So, for the string recognition program: it is along these lines:
matching to recognizer dictionary
counting and so on
alphabet categorization
language closures / asymptotics
Then, the idea is along the lines of setting up the types, so they are in types, and also having dynamic types, with having the classloader and compiling the types in place, for example. Still, here there is the general definition of the selectors over the product space, this should be graphically driven with automatically saving state of each in the general expressive framework.
OK, then how to be interactive and graphically driven? This shouldn't be too bad. Basically need a graph drawing framework, into the block matrix menu selection framework, so working up cells from drawing primitives. Have for example automatic relations where the graphical layout representations are recorded when they're used with those as comparators and step partitions/boundaries, these should stack right and well and rotate-ably, generally.
Multiple axis rotational quick config parameter control, ....
eg working jmx through that through components....
then for all the algorithms they are surfaced in the component with for example standard containers and scoping those around types and transforms and so on, with then generally emitting interchange specs.
source code recognizer: recognize source code, generate templates (language templates)
OK then, in review, this effort is toward definition of:
object composition, inheritance, and implementation in one model
interchangeable objects with defining object interchange and general wrappers
maps heterogeneous schemas
reduces dependency calculations along schema product generation lines
Then, for the statistics, there are a couple ideas about where to refine and initiate the statistics. Basically model changes initiate statistics.
So, how am I getting this from Java? It is
reflection over the objects
general container and reference semantics throughout
combinatorial enumeration
graphs and path component libraries for selectors
product space definitions (write all out generally)
Allocator frameworks
There is a general consideration for the generic high-efficiency step algorithms that along the line of free lists for sequences, they are areal components so it is reasonable to consider how to rapidly estimate the upper bounds, then build the program to partition the areal resource square.
About square and long sequences, there are considerations how to treat data variously in long and short sequence. For example, data would transition from record to evolving stream.
The selector copy should co-compute the in-place shuffle, to-from, compiling that, because then the efficiency is into vector memory banks (for directly loading the vector registers, including built-in integer registers partitioned on the semi-word). Shuffle is one step (on the vector registers, but could be triangular in a general-purpose implementation), but merge is pyramidal (worst-case). So, work up the good case to at least optimize against the worst case, then work out pathologicals to specialized templates (cycle detection, model optimized). Here the range selectors are of use, e.g. in following iterators: no need to generate, only maintain, any sort of a sorted range, into the general on that, with objects bounding their components in the address allocator integer atomization layer.
The integer atomization is basically to assign various integers to literals, such that number-theoretic operations maintain products of relations over them, and then to use the machine integer instructions to generate evaluations of them. For example, to make a bag, assign apples to 2 and oranges to 3, different prime numbers, then compose the bag by multiplying it by the integer constants for apples and oranges. To count how many apples or oranges, divide or remove until there are no more left. Only a modest alphabet in modest counts fits directly in a 64-bit integer (the product of the first sixteen distinct primes already exceeds 2^64), beyond which the representation extends. Multiplication and division of integers act naturally like addition to and subtraction from the bag.
This data structure then more quickly answers searches for the items with the least count in the bag. (It returns presence on one divisibility test.) Returning the count of an item takes more time for items that have more elements in the bag, although exponential search reduces the bound time, in counting the number of powers of it to test. When a number reaches a given bound, the evaluation of the partitions can be put in place, where those balance the range to make it more square. The partitions are placed so that adjusting the algorithm from square variously follows expectations of the distributions of the primes used (with additions/removals from the alphabet).
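A minimal sketch of the bag, with a hypothetical PrimeBag and BigInteger so the sketch doesn't overflow; the point above stands that only a modest alphabet in modest counts fits a 64-bit machine word:

import java.math.BigInteger;

// Hypothetical prime "bag": apples -> 2, oranges -> 3; adding multiplies, removing divides,
// and membership is one divisibility test.
final class PrimeBag {
    private BigInteger bag = BigInteger.ONE;

    void add(long prime)    { bag = bag.multiply(BigInteger.valueOf(prime)); }
    void remove(long prime) { bag = bag.divide(BigInteger.valueOf(prime)); }
    boolean contains(long prime) {
        return bag.mod(BigInteger.valueOf(prime)).signum() == 0;   // one divisibility test
    }
    int count(long prime) {
        int n = 0;
        BigInteger p = BigInteger.valueOf(prime), b = bag;
        while (b.mod(p).signum() == 0) { b = b.divide(p); n++; }   // divide until none left
        return n;
    }
}

// e.g. add(2); add(2); add(3); gives bag = 12; count(2) == 2, count(3) == 1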
To store a data structure like a sequence of apples and oranges, multiply it by an apple and an orange. How to make it memoryless or analog? There are at least so many of each off of the bounds, which is logarithmically found, in terms of finding the max of how many of both and how many. How to maintain the sequence? Using addition and subtraction both, and testing the modulus each time to always keep it odd. (To keep them co-prime.) For example:
add a 2: multiply by three, x add 2, if divides by three x, reduce and set the high bit
use the separate counter prime, eg, 5, for both of them
separate prime for each step, assigns index to step, indicator for two-member language, separate prime for each step above fixed language range; what about an expandable language range?
How to compose in Java to the statement level? Operation map? Basically consideration is along these lines, how to compile everything. Basically is dynamically compiling integer operations and tests and then mapping the resulting integers back and forth from alphabet or category maps, to evaluate membership and count various small mathematical data structures in the numeric range, with a general range framework in the semi-word.
Then, consider, for example, where it's useful for storage to have an integer instead of a 10-member 10-possibility multiset (of object references). The standard library data structure, say backed with a hashmap to count, is of the hashcode range, but with the integer the existence test for any possibility is constant, and the count of any one is 10 constants and the count of all is 10 constants, or less. Copying the multiset is a one-step integer operation, along with the context of the alphabet and its ordering and mapping to the integer representation that is copied. Now, as the multiset grows, for example in count of elements, the standard library hashmap to counts is better, because the integer representation takes as many steps to read out. Imagine, however, that the integer rule could just have the next prime mean two of the first member, or otherwise define scaling of events (reasonably according to expectations, of which there are reasonably none). Consider instead when the number of members goes to a hundred: the integer representation still has a constant membership test while the standard container builds a hash tree. Yet, as the number of members increases, of course, maintaining a useful number of counts of items would go beyond the smaller integer into the extended-precision integer representation.
Obviously then there is a case from the asymptotics that there isn't a point in doing anything except tune the hashcodes to the container content range. Yet, in the mid-range, it demonstrably is worthwhile, so, toward a general scaleable framework, it can be the case that an algorithm optimized for the range could unblock an otherwise asymptotically correct operation.
What operations work well when storing a vector of small integers in a machine integer word? Varying on their organization, here in constant-width partitions, there is an idea to use an addition with a carry mask on the arithmetic, to be able to do modulus arithmetic on each, for addition, and then maybe subtraction, or addition then multiplication. In that way, it is a constant number of steps to evaluate the parallel operation over the word (under the width of the word). Actually it is linear with the width of the word, but could work fill where generally it is an edge case (border case).
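A minimal sketch of that carry-masked addition, on eight 8-bit lanes packed into one long; each lane adds modulo 256 with no carry crossing into its neighbour (a common SWAR identity, offered here as an assumption rather than anything specified above):

// Lane-wise addition of eight packed 8-bit values in one long, carries masked per lane.
final class PackedLanes {
    private static final long LOW7  = 0x7F7F7F7F7F7F7F7FL;   // low 7 bits of every lane
    private static final long HIGH1 = 0x8080808080808080L;   // top bit of every lane

    static long add(long a, long b) {
        long partial = (a & LOW7) + (b & LOW7);   // add low 7 bits; carries stay inside each lane
        long topBits = (a ^ b) & HIGH1;           // top bit adds modulo 2, no carry out of the lane
        return partial ^ topBits;
    }
}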
In these cases, the use of all the members in the alphabet has been assumed. For example there are 26 primes a, b, c, .... Using only apples and oranges, for example, that is 2 and the prime at that offset primes(index(o)).
What are other parallel evaluations as above?
1. move scalar elements out to vector element and aggregate (reducing, constant over width)
2. move scalar elements out to vector element and return value or range (reducing, asymmetric/asymptotic)
3. move scalar elements out to vector element, process elements vector-wise / in-place, return (constant in-vector step speedup)
4. move scalar elements out to vector element, align with I/O, emit serially
5. read elements from I/O (specialized), from memory/cache, ..., register transfer (specialized)
Here, there are operations while they are on the vector, and supporting that where:
there are multiple integers / object representations in a machine integer (eg 1)
the object spans words
where it is a word reference, then that is the scalar unit, where should/would it go, more or less (both, balanced, half statistical)
working in units generally in spatial maintenance (unit transference)
Then, work out how there are generally productive algorithms.
Promotion and escalation
promotion is when one of the symbols now needs a bit more than its partition
escalation is when all (or more than one) the symbols now need more bits in their partition
Then, that is still consideration, that this should all be naturally labelled, working up the integer codes.
constants:
natural and particularly digitally natural constants,
graph constants for cyclical detection probability (count down and out all structure)
sequence constants (eg primes as above)
OK, then working the binary naturals, is one thing, then how to go about the effective transition, among the bases, in completing the small products.
Consider for example a flow-of-control key. Each branch would build up over whether it's ever taken. Works up bits and escalates automatically. This is switch bank to flow graph. Reduces loops (maintains loop counts).
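A minimal sketch of such a key, with a hypothetical BranchKey: one bit per branch for ever-taken, escalating to counts when they are wanted:

// Hypothetical flow-of-control key: bit i records whether branch i was ever taken.
final class BranchKey {
    private long takenBits;          // one bit per branch
    private int[] counts;            // lazily allocated when counts are wanted ("escalation")

    void taken(int branch) {
        takenBits |= 1L << branch;   // mark the branch as ever-taken
        if (counts != null) counts[branch]++;
    }
    boolean everTaken(int branch) { return (takenBits & (1L << branch)) != 0; }
    void escalate(int branches)   { counts = new int[branches]; }  // switch from bits to loop counts
}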
where to send up the first pass: cycle detection, or toward cycle stack filling, then working that segment back to cycle fault
basically is on the bounds count call, when the bounds count is computed for the bound residues
then, the bounds counts are generally used to reduce things to integer state machines
fitting the integers state machine into the machine word
The general-purpose CPU machine word is the register transfer, instructional, and data type of the natural word width of the processor, for example 32-128 bits on generally available commodity processors. For various algorithms, the states and their progressions can be concisely represented in a general format that is available and tractable to algorithms generally.
This works up, for example, the maintenance of multiset contents over items in small space that is portable, easily recoverable, and processable in parallel, where the object representations are maintained as they are used. Then, the maintenance of the maximum count of items as the bound takes as much time, linearly, as the maximum count of items; concentration on one item will exhaust the item but not the max, needing to sample all for the max while it is non-zero, requiring prime factorization to get the max, suggesting maintenance of the max and down. These define, under the bounds, as items are added or subtracted, how to freely get the count of all items in the prime alphabet across an integer, which can be extended.
Then basically the population samples (or proportions in full large data processing) advise the size of the trees that are maintained then for the alphabets of relevant size to reduce the object (in space) to three integers: pointer/reference, key, and data (per-object, besides the static object). Obviously the tree has empties or terminals where then the key is used to satisfy the data requirements or the member requirements for the object through the reference. Why/what then key? What if there is only one or few objects, those are then as well to evolve in this manner a concise and organized structural representation.
Generally about parallel evolution and reduction/regression
shuffle/permute (index, re-order, across/up to merge level)
placement, range, characterization, order, sorting
work up natural bounds that satisfy data dependency
co-work periodicities and linear progressions identically along bases with preservation generally
The flowgraph map is used to emit the compilation, the evaluation has the compilation instead of the full compilation, or it is conditional? It is conditional to combine. (Reducing flow graph profile, moving implementations and so on).
OK, then this is feeling more like an environment where it might be able to productively bootstrap a variety of systematic concerns for data sets.
Then, how to set up something useful? Here there is a consideration in the machine learning context that, according to the containers' use, they are optimized in various ways, in implementing dynamic container and access algorithms. Another is how the same algorithm translates to different machines and contexts, for example the pure Java machine and the assembler machine. For Java, how to serially execute integer instructions, composing them generally? The notion is to implement compilation of that.
So, basically a parallel evaluation framework, toward having multiple state machines there then. What happens when they go from base 2 to base 3? Idea is to work out addressing that involves, for example, in the cases of the state machine, the copies/representatives/delegates/primaries (objects) of composition/inheritance/transform (access), to compile those up variously for objects to define structural bounds for accommodations.
one bit to two bits -> base 2 to base 4, storage for 3 and extra
how to work/plan/consider carry and saturation? working range boundaries and special elements, any flow control off of sort
basically is the consideration of intermediate products and the paths to products, in terms of productive products
Now, the statistics are to be generated, concisely.
tree evolves <-> statistics evolve
Now, this leads to the multi-ragged very much. The tree is for example for each function, its imprint on each function it calls, and for each function how much it takes. Here the statistics are general, opportunistic, and ubiquitous, or rather vice versa.
ubiquitous:
change to synthetic value <- random variable
change of state <- all relevant variables are r.v.s, many constant
change of derived value <- function of random variable, work to remove bias
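A minimal sketch of one such ubiquitous estimator, using Welford's online method (an assumption here, not named above), so each change of value is folded in as a sample without storing the history:

// Online mean/variance for a random variable whose every change is a sample.
final class RunningStat {
    private long n;
    private double mean, m2;

    void sample(double x) {            // call on every change of the underlying value
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);      // accumulates the sum of squared deviations
    }
    double mean()     { return mean; }
    double variance() { return n > 1 ? m2 / (n - 1) : 0.0; }   // unbiased sample variance
}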
Then, generally it is good to maintain general estimators, and bases of reference for transforms among components.
Working in floating point values: sometimes they share resources with the integer state machines and numerics framework, other times they are used for the general accoutrement according to their type. About rotational transformation and so on, that is generally maintained in the angular, in that manner toward having error-free component bases, towards that diffracting integration is minimized or accounted. Floating point: very useful for neural net bases, with asymptote detection (bounds detection, cyclical detection).
Generally available progressive codes for forward machine representation: here the general notion is to represent the encoding so that, for example in that section of the flowgraph, that the state result is an immediate to the evaluation, reducing code size and cost of state machine maintenance. Useful again with parallelizing coding to other components, because the machines are coded in integers.
OK, that is a fine idea. Yet, what's the point, in scaling that, besides scaling model functionality?
OK then, for a language, is one thing, there is a consideration to store for each variable its range and so on, and also declaring for otherwise integer or string type variables what are their ranges.
Then, for these parallel and flood evolvers and steppers, there are all the natural periodic partitions and boundaries, those should have generators, and then off a few forward steps of that, deductively reconstructed, splicing out a regular boundary case to establish periods. Work those up and down the semi-words. Basically the word is useful, but it has a general escape, so interchange in the semi-word might facilitate word state in the word (or just tag words in the block sub-word).
Now, this example of using the integer tag atomization library has then as part of the object framework, there are to be ways then that the objects are variably baseable, and that runs can be pre-computed so that the selectors and algorithms combine and that the corresponding space-time diagram can be fit to available service, resource, and scheduling concerns.
OK then, back to basics: does this mean rewriting algorithms and data structures in this new form? Partially yes. Largely algorithms are to be rewritten (or ported). Data structures should follow naturally from their sources, and their classes are maintained. What about writing algorithms, and re-writing algorithms? One idea is to write the algorithm in the form where it is compiled into that algorithm template. Algorithms should be strictly reduced, in language terms, to isometric or isomorphic algorithms in template, although of course those are generally filled with specializers and to be analyzed; then algorithm instances are generally referenceable.
Then for example an algorithm should be defined as a reduction or along the lines of a product form
min max
sum count
sort
search
library call
The search algorithms are close with the selectors.
average mode
partitions and boundaries, scales and scalar character: these are to be worked up over distributions, and natural partitions over them, so that linear and log-linear components are variously extracted, with the model fitting being key
For that then, need to have this numeric model of a distribution, and what it means, for a given distribution, with numeric samples relative to that distribution or family of distributions, or relative to the distribution and its parameters. So, the distributions are pure distributions, mathematically; how to represent them digitally?
distribution: f : N -> R[0,1] (pdf), CDF, MGF
Then, maybe can work moments into generic range partitioning.
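As one possible digital representation, assuming a finite support (the interface and method names here are hypothetical, a sketch rather than the framework's types):
    // Hypothetical sketch of a digitally represented discrete distribution:
    // a finite support with a pdf, from which the CDF and moments follow.
    interface DiscreteDistribution {
        double pdf(int n);                  // P(X = n), in [0, 1]
        int lowerBound();
        int upperBound();
        default double cdf(int n) {         // P(X <= n), summing the pdf
            double sum = 0.0;
            for (int k = lowerBound(); k <= n; k++) sum += pdf(k);
            return sum;
        }
        default double moment(int order) {  // E[X^order] over the support
            double sum = 0.0;
            for (int k = lowerBound(); k <= upperBound(); k++)
                sum += Math.pow(k, order) * pdf(k);
            return sum;
        }
    }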
Alright then great: some notion of a distribution. Those are discrete distributions. Then there are real-valued distributions, of both the non-negative and all positive and negative real numbers. Then the idea is that, for the given expected distributions, it is to be determined, in terms of computational space and time, the cost of the development and maintenance of statistical information. There is the general consideration to work up these canonical orderings of named distributions, and to use those to work around the centralization of moments. For example, add a relation and it automatically generates statistics about the related items, and in summary for example generates statistics opportunistically on a pass over the data. Then, it could be simply pass driven, the storage of samples in massive redundancy, and then to bucket everything into statistical refinement while waiting for the scheduled pass or run, compared to the local refinement processing (eg also driving change runs). Now, the samples are integer valued, but the parameters are real valued, or fractionally valued, say. That gets into the fractional maintenance, for cases where the extensive storage allows recomposition of exact integer valued results in scale.
Then, it seems critical to define the centralizing and non-centralizing distributions, in terms of that, as a general abstraction, maybe it should be considered to implement the algorithms on them, to define various products and completion groups throughout. In this case it is about separating linear projection and clustering. Basically the neighbors are to work up their neighbors and store as above to work out distances and clusters. Then in merges they would double combine and then could be separated along the current axis perpendicular projecting through the connection of the vector segments of the overall vector. Eh, lots of machinery that in a direct case could be implemented efficiently.
So, how many samples to collect? Idea is: to not weigh down processing. If there are 1000 samples maintained, re-running the integration on each sample is a 1000x slow-down. Yet, where it is 1x, sometimes it can be pipelined and parallel. Then there's a consideration as to why and when to maintain samples. Basically it is partially a consideration of the cost of the computation of approximative and particularly recoverably approximative numbers. Then there's a consideration of a general statistical refinement, totally of a sort. In that sense each change of value is an event, so it has to be read out twice, as value and statistic, unless the same algorithm carries value and statistic. Then where it's a new sample it's free to carry; where it's a refinement it squares (doubles).
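One standard way to bound that cost, offered here as an assumption rather than something from the plan, is reservoir sampling: keep at most a fixed number of samples per r.v., with every value seen equally likely to be retained.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ThreadLocalRandom;

    // Sketch (an assumption): bound the per-r.v. sample store with reservoir
    // sampling, so maintaining k samples costs O(1) per new value.
    final class Reservoir {
        private final List<Double> samples = new ArrayList<>();
        private final int capacity;
        private long seen;

        Reservoir(int capacity) { this.capacity = capacity; }

        void offer(double value) {
            seen++;
            if (samples.size() < capacity) {
                samples.add(value);                                   // fill the reservoir first
            } else {
                long j = ThreadLocalRandom.current().nextLong(seen);  // uniform in [0, seen)
                if (j < capacity) samples.set((int) j, value);        // keep with probability capacity/seen
            }
        }

        List<Double> snapshot() { return new ArrayList<>(samples); }
    }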
Then, what are the statistical data structures?
value, machine integer, f.p., and 2's complement layout
mean, min, max, order statistics (are there cheap order statistics in cheap space?)
basically extends the selectors and TableProduct products, to get their statistics (for example what is built up in the neighbor trees to evaluate in two steps n^2, squaring steps <-> powering steps)
basically values form the vector of a random variable themselves, but then what are their expectations? here often the expectations help to define the range.
then, there's consideration on working the range, and parameterizations of the range, with distributions, and parameterizations of the distributions.
Then, the min max for example form the same range. And, then the count of samples is as well considered.
Then, in the selectors and trees, various of these might be maintained, for example contingently on other events that are known to have an association to it.
Then, the changes to the variables effect the changes to the statistics, and the changes on the selectors and trees change which statistics are maintained/refined/generated, which take arbitrary time and space terms to effect. Basically then this will have general tasks in the assimilative and associative in data, and from that build up which statistics are relevant. Also there is a notion that over time, it's possible to select which statistics to maintain, in terms of the various bounds adjustments, toward that statistics expected to maintain information are preserved.
OK then, that goes well, in generating these kinds of things for the data structures:
for data that is eventually to be sorted, to be generating the order statistics
what this would have, for example, in data that can be tabularized, is sorting indices for its columns, presenting those to selector frameworks that then iterate in that order for compiling iterators, so iterators don't lose the object reference, which is tagged into the eventual iterator.
for values, to be generating the range statistics
this would have, for example, grouping and coloring of the components
value change:
update statistics (update chain on value)
update range (update range on value)
compute what the change does for dependencies. for example it might be that the range finds it out of its sort bracket, or anything else contingent on the value. then that cascades to mark (and maybe compute, and whether it is mark or compute) the related trees that would be invalidated, up to gross invalidation that would recompute..., basically then the algorithms and selectors are to work those up, with general re-use of selectors and iterators over those to be generally evaluated. A sketch follows.
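Here is that sketch (illustrative names, an assumption of one shape this could take): update the statistics and range on the value, then mark dependents dirty so selectors and trees recompute lazily, up to gross invalidation, on their next pass.
    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the value-change chain above.
    final class TrackedValue {
        private double value;
        private long count;
        private double sum;
        private double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        private final List<Runnable> dirtyMarkers = new ArrayList<>();

        void set(double newValue) {
            value = newValue;
            count++; sum += newValue;              // update statistics (update chain on value)
            min = Math.min(min, newValue);         // update range (update range on value)
            max = Math.max(max, newValue);
            dirtyMarkers.forEach(Runnable::run);   // mark, rather than recompute, dependents
        }

        void onChangeMark(Runnable marker) { dirtyMarkers.add(marker); }
        double mean() { return count == 0 ? 0.0 : sum / count; }
    }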
OK then great. Now, it is a computing framework; some of the changes in the values are knowns, in the sense of defined behavior without arbitrary input they're known. For the unknowns, or data, these are relevant as samples of an unknown population. So, there are samples of known and unknown populations, in the sense that the range of all strings defines a language but not a tractable one. So, each input should initiate a new language, in a sense. Or, strings can be in many languages; how to structurally organize the languages? OK, partially that's off dictionary branches. Similarly the pointers and references in literals, they're in ranges. For each character it's in a range, for each string it's in and has a range, and also the strings can be generally tokenized. This basically has the string type and typing as part of the type transference framework. Then, after the fact, the languages might be defined in the deductive rules and synthetically combined. As well they should freely associate in dictionary fragments (Boolean).
Defining language rules, use regular expressions, with the notion that they export their runtime characteristics, and how much of the string they match or the rate, where for example the literal subsequence matches go into stepping and co-presence in ranges, basically with that many rules are evaluated at once on strings generally to evaluate their contents
The co-presence in ranges is about maintaining population statistics for a range. Again the question arises: when to maintain those forever, and when to attenuate them? Should maintain the co-presence indicator fields that hopefully evaluate in one step there; how it should be is toward optimizing toward the component merges and changes, while still keeping that (basically working up counters to bounds and filling, for tiling).
OK, great, then working up:
ranges
variables
distributions
selectors
then, consider, when an algorithm exits. Then the statistics can run out asynchronously. Then what about when they are still computing and the function is re-entered? Well, then the machinery should run to that: the re-entrant function should report to the statistics stepper. Then, as much as can be maintained is, and the function has its statistics generated according to how often it's called: more often, fewer statistics; less often, more statistics. (Also can be per-caller, per-thread, besides per-call.)
ok, then for example where the function templates are re-used, the function instances will be geared toward structural support of the algorithms. That way, for the given data that goes through the algorithm, the statistics about it going through that function will be carried with it, besides the selector correlations on the range correlations.
Then, how to proceed forward? It is the regular question. Basically random variables are naturally defined: which are of interest. How to go about generally journaling their statistics, that is one thing. With time series, it is along these lines: instead of a timestamp with a time, all the objects that are defined in the same logical timestamp get the same logical timestamp. The timestamp is actually used later in time series data. Point is to work them into ranges, and also to quantize them into time ranges. So, to simply collect the samples in a vector with their timestamp or context stamp allows then their general treatment as the samples of the "population" of that r.v. Now, the r.v.'s have separate lifetimes in for example each program invocation. So, it is reasonable to assume they have distinct distributions each program invocation. Obviously with identical inputs they're the same. The r.v. has also the lifetime across program invocations. This gets into the re-use, and how also the re-use will be automatically algorithmic and below the level of where the r.v.'s have the lifecycle that varies on the r.v.'s that go into the algorithm, so, the statistics should go with the variable, and any dependent variable _is_ a function of the partial variable (jointly with all components as should be read out). So, for any function, basically it should have the closure of the variables, so that their pairwise-possible evaluations are read up, and also whether they are passed to methods together, and it is upward indicated that they are either mutually interdependent or that another is dependent on them. (Eg, variables might condition the result, but variables as literal to result is wrong, or rather simply gets into piece-wise composition.)
Another general facility of the distributions is their piece-wise composition. This is basically adjustment of the conditional, and about how the distribution is only meaningful as a product contingent on the other, but as well could be of the same as the main. So, the distributions on each flow of control statement and assignment and expression need to be evaluated, that is why there is the high level definition and that still external algorithms can be used with layout control.
So, each r.v. has a distribution, with parameters for families of distributions (compared to, say, point-wise; vector parameter compared to smooth). For all its language it's of each of those until it censors. That is with the range and the alphabet. Ranges are intrinsic to numerics but apply to alphabets, and also to what alphabets are used for in distinct individuals in the population. Then, in terms of what samples to maintain, it's to get efficient estimators of the parameters of those variables, also of those as random variables. For model fitting, there are rough models of each of many known distributions, different under various transformations of the scale parameters to fit them variously; then the statistics are maintained and computed according to computing whether the data is likely to match that distribution, in for example a scale-invariant manner in co-projection.
Discrete distributions, generally sampling with replacement, but also will be rating and so on in the extraction of periodic components and features
working exhaustion, like waiting for the next of the search result to complete the range, eg passing the better guess on the fill
basically working those on pairs/jointly, eg binomial on matches for successive terms and also for matching along equality and similarity lines, working indicators and Bernoulli
Then, working on the discrete probabilities, it is about efficiently building the joint, particularly while it is serial eg in in-place routine use of vector registers or local scaling capability
Why work with joint probability distributions as having the square range instead of them maintaining their co-components joint each other? Basically computing the joint PDF from the population, it has the sampling effects in the result. Those might be bounds instead of constants. When evaluating the j.p.d.f., why, is it more productive to preserve the other components, in terms of where: the point of having joint p.d.f.'s is either there is correlation (eg piece-wise, that would indicate or be indicated by the language transform generally), or, when they are off partitions, the sampling ranges.
Then, for these various distributions, the idea is to estimate what would be the distributions off of the samples. Then, use that to get a function, and take that function and invert it. Here the idea is that then advise partitions off of the CDF in terms of the uniform layout of the resources. For example, the CDF of a uniform distribution would have that, the partitioning would match the expected maximum. Then the next sample is readily bracketed in distributions with means for rejection.
The pdf is the probability that there is the event.
The inverse is the collection of classes with value at that one probability; that's not so much what is wanted, but rather the area or range around it.
For example, 1/6 is the probability of 7 in 2d6: the pairs 1,6 2,5 3,4 4,3 5,2 6,1 are 6 of the 36 outcomes, a 1/6 part. Then the 1st order statistic of that.
For example, using the multinomial, to estimate the probability of given items in a sample, or to get back an ordering statistics down over the maxima and figuring out how to sort the maxima.
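As a sketch of advising partitions off the empirical CDF (an assumed approach; the names are illustrative): take evenly spaced quantiles of the collected samples so each partition expects about the same share of new data.
    import java.util.Arrays;

    // Sketch: k-partition boundaries as empirical quantiles of the samples.
    final class QuantilePartitioner {
        static double[] boundaries(double[] samples, int partitions) {
            double[] sorted = samples.clone();
            Arrays.sort(sorted);                          // empirical CDF via order statistics
            double[] cuts = new double[partitions - 1];
            for (int i = 1; i < partitions; i++) {
                int index = (int) Math.floor((double) i * sorted.length / partitions);
                cuts[i - 1] = sorted[Math.min(index, sorted.length - 1)];
            }
            return cuts;
        }
    }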
About algorithm and cascade, there are sometimes blocking and non-blocking ones, and also ones that make sense to send speculatively. For example a time-scheduled step might be completed ahead of time; send it with the outgoing batch for the next bucket at the midnight window, and in the next, if it's forward, it could go up. So the notion is to advise the algorithm: yes, advise this event in case it might be read ahead of time, else drop it; then whether it could be sent or read, it could.
If there is a joint PDF then the marginals can be read off from summing over the various combinations of the others.
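Concretely, for a discrete joint distribution stored as a matrix (a minimal sketch, illustrative names):
    // Sketch of reading marginals off a joint PDF, as described above.
    final class JointPdf {
        static double[] marginalOfX(double[][] joint) {
            double[] marginal = new double[joint.length];
            for (int x = 0; x < joint.length; x++)
                for (int y = 0; y < joint[x].length; y++)
                    marginal[x] += joint[x][y];          // sum over y for each x
            return marginal;
        }

        static double[] marginalOfY(double[][] joint) {
            double[] marginal = new double[joint[0].length];
            for (int x = 0; x < joint.length; x++)
                for (int y = 0; y < joint[x].length; y++)
                    marginal[y] += joint[x][y];          // sum over x for each y
            return marginal;
        }
    }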
Then, there are the constituent pieces of the counting frameworks, idea is to maintain those that are efficient in the arithmetic that connects the scalar and block vector. For example, for a field record, it has its counts. For those in a row, it satisfies the range summaries. So it's worthwhile to maintain directly the collected range counts in the range instead of the record, for example re-evaluating the record for it specifically but maintaining the count summaries across the records in the range. In that sense as cacheing the value on changes to the fields, it makes sense for the summaries to be abstracted into the ranges. Then for example if the range is evaluated and rejected on the summary, the item is not evaluated. Then the summary can have statistics that represent drop-outs (that are codes beyond the language space that indicate in the formula the requirement to re-evaluate the objects in the range to get a summary) where here a drop-out is basically a composition of the abstraction above, on the distributions, then as part of the language and scanner and parser machinery.
a) useful, in terms of how many bits they'd take to represent or that in forms their use is part of efficiency
b) space-filling, in terms of that for computing them to be ready, the expected gain exists to use them
c) generally dynamic, in terms of that the algorithms are requirements bound, and adaptive throughout
Hmm, thinking here on using the vector registers on the commodity hardware: should use "just enough" or "all". Here the idea is to answer the question of when to attenuate sampling, and it's answered in terms of the local map of node resources, or actually the map of node resources where the task is distributable, to then include all costs to make use of all the paid-for resources. Basically make use of the voluminous resources at least in cleaning up; then for the re-entrant functions they are automatically defining their attenuation bounds, with that the operations profile conditions the statistical profile, and about, in re-use of functions, how they are cloned. Yet, it is not just the space resources of the routine but also the cache/locale.
Also for the register scheduling, the idea is that use of the smaller registers has expansions that fill the larger registers on the algorithm, with cascading algorithms into and out of the abstract register space. The abstract register space or ARS is a convenient representation, the abstract register space is a root component in the processing along resource lines. On virtual machines, might only be to line or expression on breaks, still is about definition of behavior along general resource lines. This requires maintenance of units as natural weights in the framework. The register is basically a storage location for the result of an atomic operation. It's intrinsic and machine operation is implemented itself in the general purpose sense of arithmetical logic units (ALUs) for integer operation and here in the commodity: vector register banks that are generally not being used by other applications in the resource time slices, or rather, in this case they are: about whether to double the registers in packing (along merge and insert/delete), or sampling (storage of sample data).
An abstraction of storage is then of the sample, for example, it could just be a clone or whatever exists as an in-state copy. The storage could be a pointer to the object. Then the object stores itself. Then it might select along wakeup a code that has for its storage for example its own overall static serialization method for a read-only read-out. Or it could component the parcel and simply in the composition framework read it out, if, for example, it only stored what was ever read from it.
In this manner, along the lines of being able to rapidly and purposefully recompose product line recompositions, general repurposability is along: that inspection and sampling in dynamic lines establish composition and treatment (eg worksheets and fill-ins).
Then for example the map-reduce task is to be sandboxed.
Basically these are composable pieces called baseunits. The baseunit type, it is like a runnable, there is a factory of runnables for its state machine or automaton. They are composable, add them together or compose them as products of various operations. Then, dynamically, they are compiled to the implementation factories. Then the algorithms get data and run.
Facilities are along the lines of I/O and data access, although that should be under. Basically the point is to be able to pass input to the system and it processes the input (source code, data).
Then, how is then this project accumulator to increment? Basically every programmatic component is to have a display widget and to fit into widgetry. They are all to be composable with simple web based drag and drop tools. These go with the debugging instrumentation, also throughout the system, where the systems generally are to be interactive and reprogrammable throughout. Basically they have natural graphical representations, then according to various plans those go into graphical data processing, as well in interactivity toward representing both the forward and reverse mapping.
Then, the reports and so on, all should be buildable, with reading and writing XML with schemas. Schemas are generally used to map among schemas, and very simple schemas are used throughout as composition/inheritance/transform description and graph mapping to isomorphism of subgraph.
schemas
selectors
objects
data and layout
algorithms
scheduling
Query and selection language, also product generation language
combinatorial enumeration - when there are possible combinations, then in small ranges, it can be computed their bounds and allocated efficiently their setup, in medium and small-medium ranges, then partials on the smalls can be maintained in specialization of steps of the enumerations in traversals, where brute combinatorial enumeration would no longer be so useful
completion in paths and in subgraph mapping to alignment - basically along the lines of schema mapping and for example delegation throughout, the combinatorial enumeration can regularly and basically in time compute for coding general reductions, also for general pattern matching to check for general mis- and close-matches - with completions over insertions/deletions , for example in algorithms step-throughs in range matching and the ragged on boundary alignment - completion in paths is basically increasing the graph width and looking for matching on all partial evaluated subgraphs, toward for example error-correcting coding
statistical progression in ubiquitous statistics throughout - with time and progression registration throughout, there is generally available for each random variable (r.v.) to have its value as a sample enqueued, and generally according to sampling it is then worked into how algorithms use the samples of which random variables to compute their and other sample statistics.
generate the product, then generate the queries for the ranges, then working selectors and coloring - this is an example of enumeration of the product space before computing any part of the data, along the lines of the plan for the accesses through the data according to their cost as resources, then using that to advise the selectors and their reference machinery over the objects how to organize the layout and the algorithm of and on the data.
building coloring and permutations into the variable type, working that through out to the graphical - basically this has that each datum accorded maintenance as a stored type begins to maintain its own for use in multiple containers with shared contents
Point here is to compose the object, the base object basically has a primitive graphical representation that is driven by its composition, just as there are generally reflective mechanisms over its composition.
Then, graphically, fitting and matching, extracting rules, expecting rules: the graphical is generally having then natural layout according to design rules, for example with symmetry and fill. Then in ubiquitous representations like the objects panel that it draws, objects can be drawn on its readouts.
statistics: Bayesian, non-, centralized and non-centralized distributions, centralized and the normal, non-centralized and digital counting frameworks
basically working up then, a class of distributions, that auto-fit to various data samples
and to get the data samples generally in short forms, into the well-distributed statistics
for that, there is generally the timestamped and function-associated ordering data (preserving sort keys, or, exporting sort keys in unsorted data)
the variable's values themselves, in the static node processor model, are statistics; these functions of the measurements are all to be preserved so as to have estimators as reliable and low-bias as possible
work up the table to the forms of those with easily computed or tractable values, in general numerics
numerics:
(structures and relations)
virtual machine / primitive:
natural integers
floating point and imaginary numbers
terse/normal:
fixed point (and imaginary numbers)
variable/extensible precision (and extended precision)
rational approximation and precision maintenance
counting frameworks:
digital and bank array
working shuffle
Now, it is very reasonable to consider performance aspects of the nodes. The nodes generally are virtual timeslices of general purpose (multi-) processors. It is totally reasonable to use all the registers of this node in its timeslice, in fact in maintaining that the virtual environment might hard-dedicate the vector resources of the processor to the node, using the vari-parallel of the vector arrays of the nodes (for example in the multimedia extensions, or in node architecture), there are basically defined algorithms to move the data in forms among:
character oriented
code page mapping, I/O mapping
word oriented
half word and doubling
scale and fill
natural and all integers
(register oriented, float)
vector oriented
vector step bank processing
Then, the algorithms are designed to make use of the data on the vectors, and the algorithms move the data onto and off of the vector registers for refinement.
For the product of combinations of these types of data and how their natural/intrinsic and synthetic operations run across them (amortized over execution scheduling), for each element of the product there is to be enumerated the cases of the transitions of the values in the vari-parallel among the various alignments of representations of the data that are co-processed.
For example in a batch of 100 records, they might be co-processed. Until there's an error record the ordering is immaterial for that operation's guarantee of completion. Similarly for each of the integer scalar elements of the vector array, in one processor step they are processed together. Then, the result of their computation is to be checked.
Here, there is a general consideration of rate-limiting among scalar and vector processing components. Also a general notion to arrange micro scheduling coordination on rating / rate-setting.
What is to be the composable element, is that in terms of the data, for the 100 records, they are to have their selectors read out and their critical variables packed onto the vector register to be step evaluating in banking. Then, the bank of records completion and the locks on those 100 records are marked, although maybe the lock frees could be spewed to the range aggregation for the range completion, contingent on the success routine. (Have to rewind error record extraction = stack enough to go back that far.)
These will naturally combine in various shuffling networks to very efficient processing over time. For example, client-server and cacheing can be specialized to run out through the process template a compilation of a run (eg test snapshot). So the composable units of the program definition are very heavyweight indeed.
Basically then the idea is to work out the guarantee over time of process completion, then to look out for worst-case data sets and so on, in maintaining statistics on the records variously, and on error records generally. Yet, then that might get in the way of general rate and flow.
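As a sketch of how such a batch step could look (the BatchStep name and its hooks are assumptions, not part of the plan): collect error records instead of stopping, and only release the range when the success routine sees no errors.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    // Sketch: co-process a batch, keep error records aside, mark the range
    // complete (release its locks) only on full success.
    final class BatchStep<R> {
        List<R> process(List<R> batch, Predicate<R> step) {
            List<R> errors = new ArrayList<>();
            for (R record : batch) {                 // ordering immaterial to completion
                if (!step.test(record)) errors.add(record);
            }
            if (errors.isEmpty()) {
                markRangeComplete(batch);            // lock frees aggregated per range
            }
            return errors;                           // rewind/retry path for error records
        }

        private void markRangeComplete(List<R> batch) { /* range aggregation hook */ }
    }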
What to work into general evaluation? It only makes sense to work the data up this way when the algorithms have enough schedule to run them for the data. Still, generally the case for parallel evaluation intra-node is strong. It is cache-local data, and, generally, the resources are not even already used and should be, to take advantage of cache and so on intra-node.
The cost of register transfers is high, then a consideration is as to distance and measure of rate there. One notion is to exercise the general transfer cases.
1. move scalar elements out to vector element and aggregate (reducing, constant over width)
2. move scalar elements out to vector element and return value or range (reducing, asymmetric/asymptotic)
3. move scalar elements out to vector element, process elements vector-wise / in-place, return (constant in-vector step speedup)
4. move scalar elements out to vector element, align with I/O, emit serially
5. read elements from I/O (specialized), from memory/cache, ..., register transfer (specialized)
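For illustration, a plain array can stand in for the vector register to sketch cases 1 and 3 above (the width 16 and the names are assumptions):
    // Sketch: scalar -> "vector" transfer, in-place vector-wise step, then reduce.
    final class VectorBank {
        private final double[] lanes = new double[16];     // stand-in for a vector register

        double loadScaleAndSum(double[] scalars, double scale) {
            int width = Math.min(scalars.length, lanes.length);
            System.arraycopy(scalars, 0, lanes, 0, width);     // case 1/3: move scalars out to the vector
            for (int i = 0; i < width; i++) lanes[i] *= scale; // process elements vector-wise, in place
            double sum = 0.0;
            for (int i = 0; i < width; i++) sum += lanes[i];   // aggregate across the vector
            return sum;
        }
    }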
As well, algorithm components are freely designed in custom logic of a sort.
What are the algorithms that would make use of the various organizations of the data and so on? Basically general purpose routines should be worked up to each of the forms, and as well they should interchange, so where the logic isn't synthesized they should build from those composers (comp-els).
For example, in computing a product while the values are in order in the array: semi-word product widths, use of small integer frameworks for bucket bounds computations in vector parallel?
For example: mapping a change in the ordering selector (i/d/t, insert/delete/transpose, shift) into the vector elements: shuffle, mark into and out of edge case.
The general serial algorithm and operation on the vari-parallel.
The algorithms are defined as scanners. They go over a pass of the data. That is serial. There is also a selector based framework. (Selectors are pointer/reference tree descriptions that compile to moves). (The algorithms should also be runnable on default library types.)
Then the scanner has built ahead the state machine of the language interpreter. If it's not a scanner then maybe it's a step rule. Scanner is serial, step rule is wide. Scanner can start in the character or word (semi-word, scratch), working up expected bounds for worth to the vari-parallel (in cache realignment and vector bank address).
So, an I/O stream might be read first character by character, where the algorithm is on a string type? Depends on the algorithm. Algorithms should be defined in formal algorithm types. Forward on string, or string as sub-component: the composition methods of the string are maintained/preserved, and generate functional signature maintenance/preservation. So, how to define the algorithm to automatically work data up into the vari-parallel as a general case? Basically it is about alignment and word boundaries and array boundaries. Now, when a cache is filled with this selector, instead of the object to call its extractor/getter, the cache rate is really rapid; why even cache this data? Should work up on the data what it takes, about the selector composition and the range, to naturally aggregate up to caches, or else to otherwise put the cache on flush only.
So basically where the consideration is alignment, the general v-p promoter/derogator from scalar to parallel placement of the data, and organization of the algorithm, then the edge case of the scalar placement is the beginning and end. For natural (virtual) multi-dimensional arrays, the edge cases can work to corner cases. For the sequence, serial algorithm, the edge cases are start and end. Where the values naturally align or fit, the edge cases may be trivial, but it might be worth the establishment of the processing phase that there is the transition for the algorithm for realignments under the vector register (on serial algorithms on vector register contents with progressive algorithms), as well in the stitching and merging of the algorithms over the data: sometimes they would combine with natural alignment, other times it might require shifting or gearing/transmission in ending an algorithm and beginning the next on the same or a different comp-el's selector point range.
Then, serial I/O is over the available dedicated resources and also as abstract outputs to the various event lines where update can send events. There is a separation of I/O errors and logic errors (necessary specialization of errors = requires error specialization framework).
vector operations on register model:
vector-wise: scalar arithmetic
across: spigot algorithms, min/max, area-averaging?
How to get derived products off register?
single channel data on vector register
multiple channel data on vector register, paste products on selector reverse (high speed, asynch, offload with object recomposition scheduler)
Basically the vectors are small 4/16, working out where the operation boundaries are, that are along the lines of where the scalars can be spigoted / blitted onto the vectors.
Here there is a general consideration of the cases of over- and underflow in integer routines and general bit-wise algorithms that define progressions of vector banks of integer (and generally numeric) bits. It is very useful to maintain counts of flow in the semi-word, under the word. This can be used to rapidly estimate products that are the bounds of the object, in only shifting the bit its offset. Basically before and/or after an atomic arithmetic operation on the integer, its range is computed. This might be the new range or it could be the products that would extend it to the next step in the range.
How about then the spigot algorithms, and screw-driven algorithms?
spigot algorithm: put in more data, refined comes out. Have as many pipeline steps as there are register progressions. Then it is step-wise forward across the vector.
screw-driven algorithm: rotates and refines
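As a loose sketch of the spigot shape described above (the names and the refinement step are placeholders, not a real spigot algorithm for any particular quantity): data is fed in, pushed stepwise through a small pipeline of stages, and refined values drip out the far end.
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch: one pipeline slot per refinement step, stepped forward per input.
    final class SpigotPipeline {
        private final Deque<Double> output = new ArrayDeque<>();
        private final double[] stages = new double[4];    // one slot per pipeline step

        void feed(double raw) {
            double carried = raw;
            for (int i = 0; i < stages.length; i++) {      // step-wise forward across the bank
                double refined = (stages[i] + carried) / 2.0;  // placeholder refinement step
                stages[i] = carried;                       // each stage keeps the value it was fed
                carried = refined;                         // and passes the refinement forward
            }
            output.add(carried);                           // refined value comes out
        }

        Double drain() { return output.poll(); }
    }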
How about this: how about a mask over the vector contents, so the register or value selector goes over the vector content in the semi-word and evaluates each serially for forward scan?
For the serial and bit mask, this should be lower: where the high bit is and its offset generally, using that for counting, in that it maintains bounds of having a separate bit for each count i.e. maintaining its alphabet in range.
Now, how to make it then, that the forward processing of the data, gets back correctly. For example, the I/O framework gets a success result and starts streaming into the vector onloading. The algorithms on that data (and their contingencies) are set up to be executed on the corpus of the selector (or to compose or integrate the selector), this is generally a speculative framework so it is to be expected that all data will get to a near compiled representation. Yet, that depends on the data and volume. If the compositor- compositional- computational- element "comp-el" is to read into the vector and run in that manner, it should run off expectations and availability of resources, where that then demands the presentation of the register and processing resources as variables in the system.
Then, the refinements, sometimes they're sample driven, other times continuous (aggregate, off min/max), delivering samples.
Then, for the space algorithms and the vector, that should work out naturally too with the selectors.
The selectors basically define that they accumulate before they are executed and co-plan (work in constants). Selectors basically define addressing reference through algorithm-replete presentation of trees and graphs of data into bounded sequences (and also about the geometric).
Register contents and time-sharing environments: checksums of values in store
Defining the processing resources in the system as variables
There's a general consideration to work co-scheduling to measure availability and response, although that is not optimal systolically (because it presumes long durations between executions unless they are chained and continue execution within the time allotment). So the scheduler should be for a recurring event, with a before/after time and a range (or "at"). Still there is to be figured out: what are the resources, and how to measure them.
Timestamp framework: high-resolution counters
read character / range: work off multipliers of algorithm space descriptions, what it costs and how much progress occurs in the forward step
count iterator progression
characterize wait profile of routine
How is something like a massive c-s (client-server) array using parallelism? For example in batch completion networks, with marking elements of an array done, and mapping back from the selector to the objects and notifying them.
Another notion is that maybe the general purpose logic is simple, and then to build up the statistical framework in the vector.
variables of operation:
time to completion (composed of times to completion)
invocations, invocations / time, each
how many resources it touches, pass counts
the size of inputs/outputs
Then, how to set up the standard libraries, and how to instrument loops, for these?
Basically could use a convention in Java. The leaning is toward having not a TypeGetter on the object API, but instead defining it on some TypeGetter class that is specialized to the type, and that can be specialized to String objects etcetera.
Java has no intrinsics anyway - still would want a library interface, but then how to instrument Java loops without ugly labels everywhere on each of the loops? Basically there is a consideration that in C++ one could have scope automatics that automatically reset each scope. That's one: how to have, for any object, detection each time it comes into scope, without defining another constructor call there, to make it implicit?
Maybe then along the lines of javassist or workflow callbacks. Still, for loops, then make a loop an object and compose them. (How to work up terse definitions close to or in the source language?)
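One way that could look, as an assumption rather than an existing library: wrap the Iterable so that any for-each over it is counted implicitly, making the loop an object.
    import java.util.Iterator;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch of "make a loop an object": iteration counting without labels in the body.
    final class CountingIterable<T> implements Iterable<T> {
        private final Iterable<T> inner;
        private final AtomicLong iterations = new AtomicLong();

        CountingIterable(Iterable<T> inner) { this.inner = inner; }

        @Override public Iterator<T> iterator() {
            Iterator<T> it = inner.iterator();
            return new Iterator<T>() {
                @Override public boolean hasNext() { return it.hasNext(); }
                @Override public T next() { iterations.incrementAndGet(); return it.next(); }
            };
        }

        long iterations() { return iterations.get(); }
    }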
So, what then are to be the definitions?
Product - of what, for what operations? Sees readout of type combinations
Datum - in transfer or reference, y
Then, these selector arrays for example are to be natural iterators. Then, in Java, how to get to
import java.util.Collection;
import java.util.function.BiConsumer;

// <- loop body passed in as the BiConsumer (standing in for the Action type above)
static <T> void for_all(Collection<T> of, BiConsumer<T, Collection<T>> todo) { // <- the generic method gets the type through
    for (T each : of) { todo.accept(each, of); }
}
Then, figure out a relevant for_each, for_all, for_any
for_each
convenience order
runs with selector
for_all
aggregate
for_every
aggregate, return completion
for_any
asynchronous update
(non-blocking on each, no error path or alternate error path)
paths can be queued/discarded
loop bodies need these things just like sequences also, while and do
do, while, ...until, ...; until might best be out of the language, want to avoid "for" also
Then, they all do the same thing in the general case, and in the specialized case, they would collide at compile time if there were upstream errors, though it could change the conventions
Then, basically for the results, it is about the error path or the result contingencies. Also batching, with the notion that the systems will be for example handling the errors of each of a batch, toward their eventual result in an aggregate.
Where the evolutionary aspect comes in is as that various resource modulations over the various sample data run, then there can be the small timescale estimation of the evolutionary selections with then being able to run the first many generations of various resource combinations on small data sets (of large/huge population) to rapidly search for matchings, error bounds, and asymptotes on range matching, these filter out as advice for the runtime (as a runnable configuration).
Then, how to break those to ranges? Basically consideration is along the lines of completion and matching out to duplication and so on, in huge redundant processing system for amortized over scheduling execution, to output graphically and in lines out to work and resources the performance.
OK, that is for the computational framework and the composition of the computation, with having that having the beginning of exposure to transparency in for example the evolutionary framework or weighted graph framework, then there is the notion of the goal finding, the resources, the ongoing experiments, and so on.
"Theoretically, the car produces enough down force to drive upside down." -- http://www.fastcoolcars.com/saleen.htm
Then, the idea is to go about defining the data in schema. Then, to be characterizing that data, it is generally a time series experiment.
How about this I/O over selectors then, in Java; for example these are reference chains in compositions. So, the idea is to have the object return an object that is actually referenced so many levels deep by a path spec instead of however many levels deep in indirection. All the classes with compositional inheritance implement this in the convention; thus in that manner the selector operates off of the roots. Then for example, as this is another method maintained in convention like equals and so on, it just has that, from the reference selector path, it is the first item in declaration order matching the type. Then, it should also return the path schema, so that when another item comes along and references the schema, then it compiles to that one, the direct path. So there is a general path discovery mechanism and a direct path patch mechanism, to the object reference which can be a scalar or array, or for the use of converting the return to that type, here primarily for that: that the path() method given the spec (or when the spec is in scope) returns the object for the selector, whatever the composition of the datatree root that is the compositional element, toward filling arrays with those references, then operating on those. Basically it could be fixed or variable so it's dynamic, because for example the completion results on the returns go back to the object locks. Also then each object would have a convenience lock, or just a lock to set that any other object can clear, just to see if someone cleared the lock on re-entrant functions (semaphore)? Locksets, etcetera, selectors over locks. Also, for locks, have locks over the type selectors, so that unmodifying trees are free on the reference there too, though then they would combine error conditions.
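A minimal sketch of the path() idea, assuming the path spec is a dotted chain of declared field names (the helper and its behavior are illustrative, not the framework's API):
    import java.lang.reflect.Field;

    // Sketch: resolve a path spec against an object's compositional hierarchy by reflection.
    final class PathResolver {
        static Object path(Object root, String spec) throws ReflectiveOperationException {
            Object current = root;
            for (String component : spec.split("\\.")) {
                Field field = current.getClass().getDeclaredField(component);
                field.setAccessible(true);           // compositional access, not public API
                current = field.get(current);        // one level deeper in the reference chain
            }
            return current;                          // the selector's target object
        }
    }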
object requirement:
locks, roots, path components through compositional hierarchy
interface and state to monitor
Then, in resolving the compositional and inheritance components of the objects, then for example the object could go from field list to associative map as the scale affords economies of scale (demands definition of "economy of scale"). Then, where the object is initially defined that way, then the object interface for this is still to have the accessors, but for example it can't have fields, then it's always an access, unless it has various generic fields when they're available in what is like a variant/union, but instead has room for each of the types, or of some types for example built-ins, or field lists, a "generic" type or prototype, or a custom type: here that is just to preserve field access where for example the object layout is fixed itself in the definition/interchange as it may be for external components (with otherwise maintaining its state rel. the framework and external components). The general notion of organization for presentation is about the synchronization and requirements of update, as to whether for example an interactive system can be cascading (asynchronous) or interlocking.
So, then the object factory makes, for each object, this object accessor. And all the accesses go through it; for example the selector is shared and that automatically makes sense in that the algorithms co-schedule (and co-refine) on the shared range. It is basically a reference counting and reference coloring mechanism. The numbers of counters and colors help define bounds.
How to concisely type to represent the path in the object? It is the schema accessor to the object, but a different object (or implementation) might map it differently, and it's dynamic (toward interoperable and interchangeable object representations). The same selector with that same object in schema identity for another schema could be various levels deep and in space and time, then the objects could work to refine their selector depth and the indirections would load locally relative each other. So, for the object, for the schema, it is the path spec: in field accessor, getter in property convention, and combinations of accesses and references through as well type correctors or type correlators on type sub-schema with algorithm specialization to types. Also would be maintaining constructor state in constructor framework that throws.
For that then generally there is the notion to have the various schema, and to have builders of those.
For example, a record type might be built naturally from a table and its relations, to define a space. This then goes up to the items framework, is that really feasible in Java? At least the arrays will have the products on them to get a readout of the product space easily, automatically variously in normal data forms.
Then, that can drive generators of the generation of the selectors over them.
Then, multiply collections of the objects together, generating their product spaces. Basically it is a plan.
Then the selector spec can be reused because it is off types in the product space, not the rows and columns.
Make this a table type? It is a TableProduct type, of a sort (often a table, tabular segmentation, block matrix, etc).
Then for the selector, have for example an address selector. (Finds link in table to address ID). So the TableProduct is built directly from the data sources and their relational schema (or annotations to schema).
Then adding them together could be appending them, or what; basically the product table of all the relations is to be worked out, working up to cyclical relations (a sketch of the Cartesian case follows the list below).
work in object constant space (identities, parameters)
concatenate/append:
product:
Cartesian (square, n-d, space-filling)
related: (de-/re-)normalizing to object mapping
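Here is the sketch of the Cartesian case mentioned above (the Products helper is illustrative, not a framework type): multiply two collections together to enumerate the product space as rows of pairs, without touching the underlying data beyond iteration.
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: Cartesian product of two collections, one row per combination.
    final class Products {
        static <A, B> List<Object[]> cartesian(List<A> left, List<B> right) {
            List<Object[]> rows = new ArrayList<>(left.size() * right.size());
            for (A a : left)
                for (B b : right)
                    rows.add(new Object[] { a, b });
            return rows;
        }
    }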
Then, for the selectors, they start at the root of this tree, and then the result set is a computation of the bounds of the selectors and setting up the scales for the iterator passes over the data (given read/write, insert/delete/modify characteristics of the algorithm).
Because the products and selectors don't actually include the data, it's easy to keep a lot of them in memory. The data is loaded on demand.
Then, for the selector, the various algorithms schedule together on a selector, for when the pass runs over the data. Scheduling is to start and run to completion or to schedule in a time-driven system.
Now, the idea is to be able to build these selectors easily off of the product space. So, need to graphically display the product space and build off of it.
OK, going about that, the n-d product selector.
Each of the tables will have its "primitive graphical representation"; basically the schema rules are also defined off of the product relations, in terms of where they fit in as products. Then, there is some primitive graphical display plane. The display plane is basically to organize.
HTML display plane: table, (workup)
canvas display plane: zoomable dots with legend
swivel display plane: working transitions/transforms for n-d display to plane
working component layout, working back events for interactive display in re-usable and generic representations
then, basically there should be a live display plane, or it should make one.
So, there is some default display plane, all the events go through it. (log console)
Then it is in events. Basically each of the setup operations on the data, products, and selectors should emit events, compared to iterator and scalar operators that variously do (with selectors on events).
So, defining a table might be along the lines of discovering the table from the data source, or what. Basically a data source will have automatic granularity down to its data accesses. Then for working with a data source, from all tables down to selecting each datum, those are all the TableProducts it exports. Here for example it is assumed that the referential constraints are consistent; also, there is a general recursive mechanism that enumerates classes of the TableProducts of the DataSource.
"If you like pivots, you might like TableProducts. If you don't like pivots, you still might like TableProducts."
Then working down from the data source, it should create the local representation and model of the data source as it is enumerated. Now an array or a scalar is automatically a DataSource also, in local references to local TableProducts.
Then, there is the recursive enumeration of the relations of the tables, and variously for the objects, through their composition navigation lines as above. So all the objects are navigable for their objects and relational hierarchy in making from that a tree of those, and balanced trees through each with similar navigational properties. These all then run together, and, where the depth of the levels is not too great, they are dynamic and so on, and generally then it is a flexible selector for cursor model.
Then, for example to have selectors correlate variously related terms in various data sources and their composites, with rotating the components generally about, they are defined to illustrate the space term, that should work well or be generally tractable. (Working into menu bar plugin framework on selector cursor and scroll wheel.)
OK then, working up the data and selector framework, and the interactive interface, and the programmatic environment on algorithms.
Now, consider what can be accomplished with gathering statistics of the data.
About scaling and partitions, one idea for goal seeking is to work those with near-bounds cases out, working up to where the counters can be in partitions, about maintaining the dropout history.
Here basically is this: writing statistics when expectations are met, or when they are not met.
So, how can I bootstrap this? Basically can work arrays and declarative relations, like keys. These would be nicely set up with operator overloading; consider how to use enums to run those out to integer arithmetic for the combinations, then run them back, for actually using operator overloading in Java.
Otherwise, to be generating arrays of the selector indices, where they are not so deep for positional.
For array return type, return an object and that goes through some castable interface, ...basically make replete and interchangeable ...,
So, for built in and primitive objects, make some default reflection strategy over them. Look for get/set in convention, name matching and so on in annotation generation. Otherwise, then, how to work those into the algorithms? Basically the algorithms are to the data types of the selected items. These might have general behaviors like acting on the type of object if it exists in variously readable or writeable form. For example how to recursively add them together, that would be good. Work on generally implementing stack-free recursion throughout.
OK then: make a factory that, given an object, returns its composition and inheritance schema over type discovery. (Similarly the DataSource is discoverable.) Then, how to implement the algorithms over them? Basically they describe the types. So, then there is iteration over these trees, with for example:
1) installing literal references for telling objects apart where they might only be equal for identity
this goes into a map for the object reference, copy of map is in object to return that path compilation into the object for the schema
Then, there is to be still the forward and recursive:
1) use reflection and type discovery to enumerate composition and type in hierarchy
2) implement algorithms to work on those types, also to bring them above in the schema according to then their rule set or methods as objects of those types.
for example, might be String type in object, but, really is Component of Address or along those lines, for validation rulesets, methods (aspects)
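A minimal sketch of step 1) of the forward and recursive list above, using plain reflection (the entry format "owner.field : type" is just for illustration):
    import java.lang.reflect.Field;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: enumerate composition (declared fields) and inheritance (superclasses)
    // as flat schema entries.
    final class SchemaDiscovery {
        static List<String> describe(Class<?> type) {
            List<String> entries = new ArrayList<>();
            for (Class<?> c = type; c != null && c != Object.class; c = c.getSuperclass()) {
                for (Field field : c.getDeclaredFields()) {
                    entries.add(c.getSimpleName() + "." + field.getName()
                            + " : " + field.getType().getSimpleName());
                }
            }
            return entries;
        }
    }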
Then, to generate the iterators, they could be, combinatorially enumerated. For this, for the general sense of, combinatorial enumerations, here it is of the path components and the combinations that tend to be patterns in the path components that return from among the objects. For example, if all of the objects are the same, as they often are in large homogeneous data sets, then it might make sense to compile the path instead of mapping the path. Then, if the pass actually makes a copy of the data (process boundary), it's reasonable to combine that with sorting on the result set, for example opportunistic ordering validation in range block transfer (opportunistic or investing, also greedy, in working out pre-compute bounds to accelerate later step in combining results, eg partial products for later merges).
OK then:
objects
discovery (recursive)
path enumeration
path reduction enumeration
Then, at each of these stages, these are to be, simply, over the objects, simply adding them to these containers or declaring them in these lifecycle environments. What about standard containers? It is like built-in primitives for numeric types: how to use the standard containers interchangeably throughout, and to present them to external components (besides just the object types in the model). So for this it is good to have some way to cast it directly to one of those types, to use it then with those types. Here the selector is used on a copy pass to construct the standard library containers for Java, of the objects, though, that already exist under the selector instead of only being referenced in the container. Obviously for built-in types, it would be a copy, or the function might pass it as input also, with wrapping the function to copy into a new one, or to hashcode each one, and check them for change, for update. (Auxiliary buffer generally contains a hashcode.)
OK, then there is the consideration that generally the user data types are always defined as interfaces. Then there are constructed reasonable objects to implement them. And methods are implemented in terms of the algorithms and attached to them, often typing the string and other built-in types. Or, a default implementation of an existing user defined base type would have the implementation directed under the user defined interfaces in an annotation or directly as an inner class, with then generally getting the base type from the user defined interface, installing it as a field in the interface. Working that out to strings for operational support (ofString() ...), this will help, ....
So, here in generics, in Java, and generally, this idea of building up the object hierarchy via interfaces, then being able to declare the default or base types, and then as well to log all their inputs right, helps to maintain user defined types interchangeably with general types, where a lot of the utility then is being able to use standard algorithms on the implementation types and their delegates, this is building up the object type above the user defined types, how without redefining the user defined types? The schema mapping allows the calls to be aligned with other APIs with similar and not same domains, in this manner, the objects are well-maintainable.
OK, feeling good about this system plan. Basically there is the consideration that the user-defined objects are declared in interfaces, which also tags methods besides fields, and getters and setters are for property types; also other categories can run off defaults (POD type eg). The user-defined objects are defined in classes, either default classes or others. Then, where the system wants to instrument aspects, it can wrap the class in an interface, preserving its error handling.
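For that wrapping step, one standard Java mechanism is a dynamic proxy; a sketch, with the instrumentation point left as a comment (the Instrumented helper is an assumption, not part of the plan):
    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Proxy;

    // Sketch: wrap an implementation in its interface; re-throwing the original
    // cause preserves the wrapped class's error handling.
    final class Instrumented {
        @SuppressWarnings("unchecked")
        static <T> T wrap(Class<T> iface, T target) {
            InvocationHandler handler = (proxy, method, args) -> {
                try {
                    return method.invoke(target, args);      // aspect hooks go around this call
                } catch (InvocationTargetException e) {
                    throw e.getCause();                      // preserve the declared error path
                }
            };
            return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
        }
    }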
Then, the interfaces are reflected and their contents enumerated, to make a tree of the references of the object. Then, these are reference counted for use in collections, and here their reference is variously maintained in the object factories. Then, for two corpora of data, when they are to union, for example, and the reference paths are different from some types of records than the others to the relevant field, the algorithm is agnostic of it.
Then, how to use the user defined types? Consider for example a service model's types. Now, for each of them, they've had the interface defined for them. Then it would naturally match in schema the service model type (but the actual service model class type, or superinterface, is used with that client-server API, for example). Yet, as for example general services that it calls, various of the parameters would be placed or in use, and so on for the result set. Then how is the result set being bound together in the algorithm from it? Basically the client has a result type that is among the composed and inherited types, the not derived but transformed types. So, in more reflective exhaustion of the schema, all the introduced types come into the tree. To wrap this facility, then, it pretty much demands annotation or exhaustion of the introduced types and whether they're reasonable, maybe just off the existence of the interface.
OK, then for example: how to get the strings of an address from an ID in the record, to the library type?
There would be some address type; somehow it is bound over multiple columns, say, or over how the string is encoded. Basically the plan is to work out a typing mechanism for all types of strings, basically a tag typing, running then a phylogeny of strings. Also, for path components, string representations can be optimized.
Also considerations of custom class loader.
So, for strings: tagging and typing them, while still presenting them generally to external interfaces as strings.
Like Address: it is built of Address Components, which fit into a variety of addressing schemes, down to where they are numbers and strings and codes. So any one of those can be represented as a string, which demands string representations of all object relations (from serialization to verbose/rich enumeration).
Then, this reflects a general notion to make a functionary (functional) level above the object models, toward having re-baseable interface representations (e.g., tying reference counting also to the interface and working up delegates throughout).
How about this kind of task: extract the rhythm from the music to score it to sounds?
That would work off converting the digital samples to sounds, and from those to have component extraction: periodic and amplitude component extraction over time.
So, for the string recognition program: it is along these lines:
matching to recognizer dictionary
counting and so on
alphabet categorization
language closures / asymptotics
Then, the idea is along the lines of setting up the types, so they are in types, and also having dynamic types, with having the classloader and compiling the types in place, for example. Still, here there is the general definition of the selectors over the product space, this should be graphically driven with automatically saving state of each in the general expressive framework.
OK, then how to be interactive and graphically driven? This shouldn't be too bad. Basically need a graph drawing framework, into the block matrix menu selection framework, so working up cells from drawing primitives. Have for example automatic relations where the graphical layout representations are recorded when they're used with those as comparators and step partitions/boundaries, these should stack right and well and rotate-ably, generally.
Multiple axis rotational quick config parameter control, ....
eg working jmx through that through components....
then for all the algorithms they are surfaced in the component with for example standard containers and scoping those around types and transforms and so on, with then generally emitting interchange specs.
source code recognizer: recognize source code, generate templates (language templates)
OK then, in review, this effort is toward definition of:
object composition, inheritance, and implementation in one model
interchangeable objects with defining object interchange and general wrappers
mapping of heterogeneous schemas
reduction of dependency calculations along schema product-generation lines
Then, for the statistics, there are a couple ideas about where to refine and initiate the statistics. Basically model changes initiate statistics.
So, how am I getting this from Java? It is
reflection over the objects
general container and reference semantics throughout
combinatorial enumeration
graphs and path component libraries for selectors
product space definitions (write all out generally)
Allocator frameworks
There is a general consideration for the generic high-efficiency step algorithms: along the lines of free lists for sequences, they are areal components, so it is reasonable to consider how to rapidly estimate the upper bounds, then build the program to partition the areal resource square.
About square and long sequences, there are considerations how to treat data variously in long and short sequence. For example, data would transition from record to evolving stream.
The selector copy should co-compute the in-place shuffle, to-from, compiling that, because then the efficiency goes into vector memory banks (for directly loading the vector registers, including built-in integer registers partitioned on the semi-word). Shuffle is one step (on the vector registers, though it could be triangular in a general-purpose implementation), but merge is pyramidal (worst case). So, work up the good case to at least optimize against the worst case, then work out pathological cases to specialized templates (cycle detection, model optimized). Here the range selectors are of use, e.g., in following iterators: no need to generate, only maintain, any sort of a sorted range, into the general case of that, with objects bounding their components in the address-allocator integer atomization layer.
The integer atomization is basically to assign various integers to literals such that number-theoretic operations maintain products of relations over them, and then to use the machine integer instructions to generate evaluations of them. For example, to make a bag, assign apples to 2 and oranges to 3, different prime numbers, then compose the bag by multiplying it by the integer constants for apples and oranges. To count how many apples or oranges, divide repeatedly until there are no more left. Only a limited number of prime factors fit before a 64-bit integer overflows (the product of the first sixteen distinct primes already exceeds 2^63), so larger alphabets or counts push into extended precision. Multiplication and division of the integer act naturally like addition to and subtraction from the bag.
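For instance, a rough sketch of that bag in Java; PrimeBag and the apple/orange constants are illustrative names, and overflow checking is left out:

```java
// A multiset ("bag") encoded as a product of primes: each symbol is a distinct
// prime, adding multiplies, removing divides, presence is one divisibility test.
public final class PrimeBag {
    public static final long APPLE = 2L;
    public static final long ORANGE = 3L;

    private long bag = 1L;                  // the empty bag is the identity

    public void add(long prime) { bag *= prime; }

    public boolean remove(long prime) {
        if (bag % prime != 0) return false; // not present
        bag /= prime;
        return true;
    }

    public boolean contains(long prime) { return bag % prime == 0; }

    public int count(long prime) {          // divide until the prime no longer divides
        long b = bag;
        int n = 0;
        while (b % prime == 0) { b /= prime; n++; }
        return n;
    }

    public static void main(String[] args) {
        PrimeBag b = new PrimeBag();
        b.add(APPLE); b.add(APPLE); b.add(ORANGE);
        System.out.println(b.count(APPLE));     // 2
        System.out.println(b.contains(ORANGE)); // true
    }
}
```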
This data structure then returns searches more quickly for the items with the least count in the bag (it returns presence on one divisibility test). To return the count of an item takes more time for items that have more elements in the bag, although exponential search over powers of the prime reduces the time bound. When a number reaches a given bound, the evaluation of the partitions can be put in place, where those balance the range to make it more square. The partitions are placed so that adjusting the algorithm from square is along expectations of the distributions of the primes used (with additions/removals from the alphabet).
To store a data structure like a sequence of apples and oranges, multiply it by an apple and an orange. How to make it memoryless or analog? There are at least so many of each off of the bounds, which is found logarithmically, in terms of finding the max of how many of both and how many of each. How to maintain the sequence? Using both addition and subtraction, and testing the modulus each time to always keep it odd (to keep them co-prime). For example:
add a 2: multiply by three, x add 2, if divides by three x, reduce and set the high bit
use the separate counter prime, eg, 5, for both of them
separate prime for each step: assigns an index to the step, an indicator for a two-member language; a separate prime for each step above a fixed language range; what about an expandable language range?
How to compose in Java to the statement level? An operation map? Basically the consideration is along these lines: how to compile everything. Basically it is dynamically compiling integer operations and tests and then mapping the resulting integers back and forth from alphabet or category maps, to evaluate membership and count various small mathematical data structures in the numeric range, with a general range framework in the semi-word.
Then, consider, for example, where it's useful for storage to have an integer instead of a 10-member, 10-possibility multiset (of object references). The standard library data structure, say backed with a hashmap to counts, is of the hashcode range, but with the integer the existence test for any possibility is constant, the count of any one is at most 10 constant steps, and the count of all is at most 10 constant steps. Copying the multiset is a one-step integer operation, along with the context of the alphabet and its ordering and mapping to the integer representation of the integer that is copied. Now, as the multiset grows in count of elements, the standard library hashmap to counts is better, because the integer representation takes as many steps to read out. Imagine, however, that the integer rule could just have the next prime mean two of the first member, or otherwise define scaling of counts (reasonably according to expectations, of which there are reasonably none). Consider instead when the number of members goes to a hundred: the integer representation still has a constant membership test while the standard container builds a hash tree. Yet, as the number of members increases, maintaining a useful number of counts of items would of course go beyond the smaller integer into the extended-precision integer representation.
Obviously then there is a case from the asymptotics that there isn't a point to doing anything except tuning the hashcodes to the container content range. Yet in the mid-range there demonstrably is, so toward a general scalable framework it can be the case that an algorithm optimized for the range could unblock an otherwise asymptotically correct operation.
What operations work well when storing a vector of small integers in a machine integer word? Varying on their organization, here in constant-width partitions, there is an idea to use an addition with a carry mask on the arithmetic, to be able to do modular arithmetic on each lane: for addition, and then maybe subtraction, or addition then multiplication. In that way, it is a constant number of steps to evaluate the parallel operation over the word (up to the width of the word). Actually it is linear in the width of the word, but fill can handle what is generally an edge case (border case).
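For instance, a rough sketch of that carry-masked addition in Java, assuming eight 8-bit lanes packed into one 64-bit word; the names are illustrative:

```java
// Lane-wise addition inside one machine word: the low 7 bits of each lane are
// added normally (their carry stays inside the lane), and the top bits are
// recombined with XOR so no carry ever crosses into the next lane.
public final class LaneAdd {
    private static final long HIGH = 0x8080808080808080L; // top bit of each lane
    private static final long LOW  = 0x7F7F7F7F7F7F7F7FL; // the other seven bits

    static long addLanes(long x, long y) {
        return ((x & LOW) + (y & LOW)) ^ ((x ^ y) & HIGH); // each lane wraps mod 256
    }

    static int lane(long word, int i) {                    // read lane i as 0..255
        return (int) ((word >>> (i * 8)) & 0xFF);
    }

    public static void main(String[] args) {
        long a = 0x00_00_00_00_00_00_00_FFL;               // lane 0 holds 255
        long b = 0x00_00_00_00_00_00_01_01L;               // lane 1 holds 1, lane 0 holds 1
        long s = addLanes(a, b);
        System.out.println(lane(s, 0));                    // 0: 255 + 1 wraps, no carry leaks
        System.out.println(lane(s, 1));                    // 1: lane 1 untouched by lane 0
    }
}
```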
In these cases, the use of all the members in the alphabet has been assumed, for example 26 primes for a, b, c, .... Using only apples and oranges, for example, that is 2 and the prime at that offset, primes(index(o)).
What are other parallel evaluations as above?
1. move scalar elements out to vector element and aggregate (reducing, constant over width)
2. move scalar elements out to vector element and return value or range (reducing, asymmetric/asymptotic)
3. move scalar elements out to vector element, process elements vector-wise / in-place, return (constant in-vector step speedup)
4. move scalar elements out to vector element, align with I/O, emit serially
5. read elements from I/O (specialized), from memory/cache, ..., register transfer (specialized)
Here, there are operations while they are on the vector, and supporting that where:
there are multiple integers / object representations in a machine integer (eg 1)
the object spans words
where it is a word reference, then that is the scalar unit, where should/would it go, more or less (both, balanced, half statistical)
working in units generally in spatial maintenance (unit transference)
Then, work out how there are generally productive algorithms.
Promotion and escalation
promotion is when one of the symbols now needs more bits than its partition
escalation is when all (or more than one) of the symbols now need more bits in their partitions
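For instance, a rough sketch of escalation in Java, assuming sixteen 4-bit counter lanes packed in one 64-bit word that get repacked into 8-bit lanes across two words when more bits are needed; the names are illustrative:

```java
// Escalation: every lane gets a wider partition. Sixteen 4-bit lanes in one
// word are repacked as sixteen 8-bit lanes spread across two words.
public final class LaneEscalation {
    static long[] escalate4to8(long packed4) {
        long lo = 0L, hi = 0L;
        for (int lane = 0; lane < 16; lane++) {
            long v = (packed4 >>> (lane * 4)) & 0xF;       // old 4-bit value
            if (lane < 8) {
                lo |= v << (lane * 8);                     // lanes 0..7 in the first word
            } else {
                hi |= v << ((lane - 8) * 8);               // lanes 8..15 in the second word
            }
        }
        return new long[] { lo, hi };
    }
}
```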
Then, there is still the consideration that this should all be naturally labelled, working up the integer codes.
constants:
natural and particularly digitally natural constants,
graph constants for cyclical detection probability (count down and out all structure)
sequence constants (eg primes as above)
OK, then working the binary naturals is one thing; then how to go about the effective transition among the bases, in completing the small products?
Consider for example a flow-of-control key. Each branch would build over whether it's ever taken. This works up bits and escalates automatically; it is the switch bank to the flow graph, and it reduces loops (maintains loop counts).
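For instance, a small sketch of such a key in Java, assuming up to 64 branch sites per key (escalation to wider storage, as above, would cover more); the names are illustrative:

```java
// A flow-of-control key: one bit per branch site records whether that branch
// has ever been taken; the word itself summarizes the flow graph seen so far.
public final class BranchKey {
    private long taken;                          // bit i set once branch i is taken

    public void record(int branchIndex, boolean wasTaken) {
        if (wasTaken) taken |= (1L << branchIndex);
    }

    public boolean everTaken(int branchIndex) {
        return (taken & (1L << branchIndex)) != 0;
    }

    public long key() { return taken; }
}
```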
where to send up the first pass: cycle detection, or toward cycle stack filling, then working that segment back to cycle fault
basically is on the bounds count call, when the bounds count is computed for the bound residues
then, the bounds counts are generally used to reduce things to integer state machines
fitting the integer state machine into the machine word
The general-purpose CPU machine word is the register-transfer, instruction, and data type of the natural word width of the processor, for example 32-128 bits on generally available commodity processors. For various algorithms, the states and their progressions can be concisely represented in a general format that is available and tractable to algorithms generally.
This works up, for example, the maintenance of multiset contents over items in small space that is portable, easily recoverable, and processable in parallel, where the object representations are maintained as they are used. Then, the maintenance of the maximum count of items as the bound takes time linear in the maximum count of items, and concentration on one item will exhaust the item but not the max, needing to sample all items for the max while it is non-zero, requiring prime factorization to get the max, suggesting maintenance of the max and counting down. These define, under the bounds, as items are added or subtracted, freely getting the count of all items in the prime alphabet across an integer, which can be extended.
Then basically the population samples (or proportions in full large data processing) advise the size of the trees that are maintained, for the alphabets of relevant size, to reduce the object (in space) to three integers: pointer/reference, key, and data (per object, besides the static object). Obviously the tree has empties or terminals where the key is then used to satisfy the data requirements or the member requirements for the object through the reference. Why then the key? Even if there is only one or a few objects, those are as well to evolve in this manner a concise and organized structural representation.
Generally about parallel evolution and reduction/regression
shuffle/permute (index, re-order, across/up to merge level)
placement, range, characterization, order, sorting
work up natural bounds that satisfy data dependency
co-work periodicities and linear progressions identically along bases with preservation generally
The flowgraph map is used to emit the compilation; the evaluation has the compilation instead of the full compilation, or is it conditional? It is conditional, to combine (reducing the flow-graph profile, moving implementations, and so on).
OK, then this is feeling more like an environment where it might be able to productively bootstrap a variety of systematic concerns for data sets.
Then, how to set up something useful? Here there is a consideration in the machine learning context: according to the containers' use, they are optimized in various ways, implementing dynamic container and access algorithms. Another is how the same algorithms translate to different machines and contexts, for example the pure Java machine and the assembler machine. For Java, how to serially execute integer instructions, composing them generally? The notion is to implement compilation of that.
So, basically a parallel evaluation framework, toward having multiple state machines there. What happens when they go from base 2 to base 3? The idea is to work out addressing that involves, for example in the case of the state machine, the copies/representatives/delegates/primaries (objects) of composition/inheritance/transform (access), to compile those up variously for objects, to define structural bounds for accommodations.
one bit to two bits -> base 2 to base 4, storage for 3 and extra
how to plan for and consider carry and saturation? working range boundaries and special elements, any flow control off of sort
basically is the consideration of intermediate products and the paths to products, in terms of productive products
Now, the statistics are to be generated, concisely.
tree evolves <-> statistics evolve
Now, this leads very much to the multi-ragged. The tree is, for example, for each function, its imprint on each function it calls, and for each function how much it takes. Here the statistics are general, opportunistic, and ubiquitous, or rather vice versa.
ubiquitous:
change to synthetic value <- random variable
change of state <- all relevant variables are r.v.s, many constant
change of derived value <- function of random variable, work to remove bias
Then, generally it is good to maintain general estimators, and bases of reference for transforms among components.
Working in floating-point values: sometimes they share resources with the integer state machines and numerics framework, other times they are used for the general accoutrement according to their type. Rotational transformation and so on is generally maintained in the angular, in that manner toward having error-free component bases, toward which diffracting integration is minimized or accounted for. Floating point is very useful for neural-net bases, with asymptote detection (bounds detection, cyclical detection).
Generally available progressive codes for forward machine representation: here the general notion is to represent the encoding so that, for example in that section of the flowgraph, the state result is an immediate to the evaluation, reducing code size and the cost of state-machine maintenance. This is useful again when parallelizing coding to other components, because the machines are coded in integers.
OK, that is a fine idea. Yet, what's the point, in scaling that, besides scaling model functionality?
OK then, for a language this is one thing: there is a consideration to store for each variable its range and so on, and also to declare, for otherwise integer- or string-typed variables, what their ranges are.
Then, for these parallel and flood evolvers and steppers, there are all the natural periodic partitions and boundaries, those should have generators, and then off a few forward steps of that, deductively reconstructed, splicing out a regular boundary case to establish periods. Work those up and down the semi-words. Basically the word is useful, but it has a general escape, so interchange in the semi-word might facilitate word state in the word (or just tag words in the block sub-word).
Now, this example of using the integer tag atomization library has, as part of the object framework, that there are to be ways that the objects are variably baseable, and that runs can be pre-computed so that the selectors and algorithms combine and the corresponding space-time diagram can be fit to available service, resource, and scheduling concerns.
OK then, back to basics: does this mean rewriting algorithms and data structures in this new form? Partially, yes. Largely, algorithms are to be rewritten (or ported). Data structures should follow naturally from their sources, and their classes are maintained. What about writing algorithms, and re-writing algorithms? One idea is to write the algorithm in the form in which it is compiled into that algorithm template. Algorithms should be strictly reduced, in language terms, to isometric or isomorphic algorithms in template, although of course those are generally filled with specializers and to be analyzed; then algorithm instances are generally referenceable.
Then for example an algorithm should be defined as a reduction or along the lines of a product form
min max
sum count
sort
search
library call
The search algorithms are close with the selectors.
average mode
Partitions and boundaries, scales and scalar character: these are to be worked up over distributions, toward extracting, in terms of distributions and the natural partitions over them, the linear and log-linear components; model fitting is key.
For that, then, there needs to be a numeric model of a distribution, and of what it means, for a given distribution, to have numeric samples relative to that distribution or family of distributions, or relative to the distribution and its parameters. So, the distributions are pure distributions mathematically; how to represent them digitally?
distribution: f : N -> [0,1]; pdf (pmf), CDF, MGF
Then, maybe moments can be worked into generic range partitioning.
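For instance, a rough sketch of that distribution abstraction in Java; the interface and method names are illustrative, and the summed CDF assumes the support starts at 0 and the pmf is cheap:

```java
// A discrete distribution: a pmf over the naturals into [0,1], with a CDF
// obtained by direct summation.
public interface DiscreteDistribution {
    double pmf(int k);                        // probability mass at k

    default double cdf(int k) {               // P(X <= k)
        double sum = 0.0;
        for (int i = 0; i <= k; i++) sum += pmf(i);
        return Math.min(1.0, sum);
    }
}

// example: a fair six-sided die as a pmf on 1..6
final class FairDie implements DiscreteDistribution {
    public double pmf(int k) { return (k >= 1 && k <= 6) ? 1.0 / 6.0 : 0.0; }
}
```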
Alright then, great: some notion of a distribution. Those are discrete distributions. Then there are real-valued distributions, of both the non-negative and of all positive and negative real numbers. Then the idea is that, for the given expected distributions, it is to be determined, in terms of computational space and time, the cost of the development and maintenance of statistical information. There is the general consideration to work up these canonical orderings of named distributions, and to use those to work around the centralization of moments. For example, add a relation, and it automatically generates statistics about the related items, and in summary, for example, generates statistics opportunistically on a pass over the data. Then, it could be simply pass-driven: the storage of samples in massive redundancy, and then bucketing everything into statistical refinement while waiting for the scheduled pass or run, compared to the local refinement processing (e.g., also driving change runs). Now, the samples are integer-valued, but the parameters are real-valued, or fractionally valued, say. That gets into the fractional maintenance, for cases where the extensive storage allows recomposition of exact integer-valued results in scale.
Then, it seems critical to define the centralizing and non-centralizing distributions; in terms of that, as a general abstraction, maybe it should be considered to implement the algorithms on them, to define various products and completion groups throughout. In this case it is about separating linear projection and clustering. Basically the neighbors are to work up their neighbors and store, as above, to work out distances and clusters. Then in merges they would double-combine and then could be separated along the current axis, projecting perpendicularly through the connection of the vector segments of the overall vector. Eh, lots of machinery that in a direct case could be implemented efficiently.
So, how many samples to collect? The idea is to not weigh down processing. If 1000 samples are maintained, re-running the integration on each sample is a 1000x slowdown. Yet where it is 1x, sometimes it can be pipelined and parallel. Then there's a consideration as to why and when to maintain samples. Basically it is partially a consideration of the cost of the computation of approximative, and particularly recoverably approximative, numbers. Then there's a consideration of a general statistical refinement, a total of a sort. In that sense each change of value is an event, so it has to be read out twice, as value and statistic, unless the same algorithm carries value and statistic. Then where it's a new sample it's free to carry; where it's a refinement it squares (doubles).
Then, what are the statistical data structures?
value, machine integer, f.p., and 2's complement layout
mean, min, max, order statistics (is cheap order statistics in cheap space)
basically extends the selectors and TableProduct products, to get their statistics (for example what is built up in the neighbor trees to evaluate in two steps n^2, squaring steps <-> powering steps)
basically values form the vector of a random variable themselves, but then what are their expectations? here often the expectations help to define the range.
then, there's consideration on working the range, and parameterizations of the range, with distributions, and parameterizations of the distributions.
Then, the min max for example form the same range. And, then the count of samples is as well considered.
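For instance, a small sketch in Java of a per-variable accumulator for count, min, max, and mean; Welford's online update is used here for the variance, and the names are illustrative:

```java
// Running statistics for one random variable: one pass, no stored samples.
public final class RunningStats {
    private long count;
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;
    private double mean;
    private double m2;                      // sum of squared deviations

    public void add(double x) {
        count++;
        min = Math.min(min, x);
        max = Math.max(max, x);
        double delta = x - mean;            // Welford's update keeps the mean exact
        mean += delta / count;
        m2 += delta * (x - mean);
    }

    public long count() { return count; }
    public double min() { return min; }
    public double max() { return max; }
    public double mean() { return mean; }
    public double variance() { return count > 1 ? m2 / (count - 1) : 0.0; }
}
```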
Then, in the selectors and trees, various of these might be maintained, for example contingently on other events that are known to have an association to it.
Then, the changes to the variables effect the changes to the statistics, and the changes on the selectors and trees change which statistics are maintained/refined/generated, which take arbitrary time and space terms to effect. Basically then this will have general tasks in the assimilative and associative in the data, and from that build up what statistics are relevant. Also there is a notion that, over time, it's possible to select which statistics to maintain, in terms of the various bounds adjustments, toward preserving the statistics expected to maintain information.
OK then, that goes well, in generating these kinds of things for the data structures:
for data that is eventually to be sorted, to be generating the order statistics
What this would have, for example, in data that can be tabularized, is sorting indices for its columns, presenting those to selector frameworks that then iterate in that order, compiling iterators so that iterators don't lose the object reference, which is tagged into the eventual iterator.
for values, to be generating the range statistics
this would have, for example, grouping and coloring of the components
value change:
update statistics (update chain on value)
update range (update range on value)
compute what the change does for dependencies. For example, it might be that the range finds it out of its sort bracket, or anything else contingent on the value. That then cascades to mark (and maybe compute, and whether it is mark or compute) the related trees that would be invalidated, up to gross invalidation that would recompute; basically then the algorithms and selectors are to work those up, in general re-use of selectors and iterators, over which those are generally evaluated
OK then, great. Now, it is a computing framework; some of the changes in the values are knowns, in the sense of defined behavior without arbitrary input. For the unknowns, or data, these are relevant as samples of an unknown population. So, there are samples of known and unknown populations, in the sense that the range of all strings defines a language but not a tractable one. So, each input should initiate a new language, in a sense. Or, strings can be in many languages; how to structurally organize the languages? OK, partially that's off dictionary branches. Similarly the pointers and references in literals are in ranges. Each character is in a range; each string is in and has a range; also the strings can be generally tokenized. This basically has the string type and typing as part of the type transference framework. Then, after the fact, the languages might be defined in the deductive rules and synthetically combined. As well they should freely associate in dictionary fragments (Boolean).
Defining language rules: use regular expressions, with the notion that they export their runtime characteristics, and how much of the string they match, or the rate, where for example the literal subsequence matches go into stepping and co-presence in ranges; basically with that, many rules are evaluated at once on strings generally to evaluate their contents.
About the co-presence in ranges: this is about maintaining population statistics for a range. Again the question arises, when to maintain those forever, and when to attenuate them? The co-presence indicator fields should be maintained so that they hopefully evaluate in one step; how it should be is toward optimizing toward the component merges and changes, while still keeping that (basically working up counters to bounds and filling, for tiling).
OK, great, then working up:
ranges
variables
distributions
selectors
Then, consider when an algorithm exits. Then the statistics can run out asynchronously. Then what about when they are still computing and the function is re-entered? Well, then the machinery should run to that: the re-entrant function should run to the statistics stepper. Then, as much as can be maintained is, and the function has its statistics generated according to how often it's called: more often, fewer statistics; less often, more statistics. (This can also be per-caller or per-thread, besides per-call.)
OK, then for example where the function templates are re-used, the function instances will be geared toward structural support of the algorithms. That way, for the given data that goes through the algorithm, the statistics about it going through that function will be carried with it, besides the selector correlations on the range correlations.
Then, how to proceed forward? It is the regular question. Basically random variables are naturally defined: which are of interest? How to go about generally journaling their statistics is one thing. With time series, it is along these lines: instead of a timestamp with a wall-clock time, all the objects that are defined in the same logical timestamp get the same logical timestamp. The timestamp is actually used later in time-series data. The point is to work them into ranges, and also to quantize them into time ranges. So, simply collecting the samples in a vector with their timestamp or context stamp allows their general treatment as the samples of the "population" of that r.v. Now, the r.v.'s have separate lifetimes in, for example, each program invocation, so it is reasonable to assume they have distinct distributions each program invocation. Obviously with identical inputs they're the same. The r.v. also has a lifetime across program invocations. This gets into the re-use, and how the re-use will be automatically algorithmic and below the level where the r.v.'s have the lifecycle that varies on the r.v.'s that go into the algorithm; so the statistics should go with the variable, and any dependent variable _is_ a function of the partial variable (jointly with all components, as should be read out). So, for any function, basically it should have the closure of the variables, so that their pairwise-possible evaluations are read up, and also whether they are passed to methods together, and it is indicated upward that they are either mutually interdependent or that another is dependent on them. (E.g., variables might condition a result, but variables as literal to result is wrong, or rather simply gets into piece-wise composition.)
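For instance, a small sketch in Java of enqueuing samples under a shared logical timestamp; the names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Samples of a random variable collected in a vector with a logical timestamp:
// everything recorded in the same logical step shares the same stamp, so it can
// later be treated as part of that step's "population".
public final class SampleLog {
    static final class Sample {
        final long logicalTime;
        final double value;
        Sample(long logicalTime, double value) {
            this.logicalTime = logicalTime;
            this.value = value;
        }
    }

    private final List<Sample> samples = new ArrayList<>();
    private long logicalTime;

    public void tick() { logicalTime++; }            // advance the shared logical step

    public void record(double value) {
        samples.add(new Sample(logicalTime, value)); // same step, same stamp
    }

    public List<Sample> samples() { return samples; }
}
```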
Another general facility of the distributions is their piece-wise composition. This is basically adjustment of the conditional, and about how the distribution is only meaningful as a product contingent on the other, but as well could be the same as the main. So, the distributions on each flow-of-control statement, assignment, and expression need to be evaluated; that is why there is the high-level definition, and still external algorithms can be used with layout control.
So, each r.v. has a distribution, with parameters for families of distributions (compared to, say, point-wise; a vector parameter compared to smooth). For all its language it is of each of those until it censors. That is with the range and the alphabet. Ranges are intrinsic to numerics but apply to alphabets, and also to what alphabets are used for in distinct individuals in the population. Then, in terms of what samples to maintain, it's to get efficient estimators of the parameters of those variables, also of those as random variables. For model fitting, there are rough models of each of many known distributions, different under various transformations of the scale parameters to fit them variously; then the statistics are maintained and computed according to computing whether the data is likely to match that distribution, in for example a scale-invariant manner in co-projection.
Discrete distributions, generally sampling with replacement, but also will be rating and so on in the extraction of periodic components and features
working exhaustion, like waiting for the next of the search result to complete the range, eg passing the better guess on the fill
basically working those on pairs/jointly, eg binomial on matches for successive terms and also for matching along equality and similarity lines, working indicators and Bernoulli
Then, working on the discrete probabilities, it is about efficiently building the joint, particularly while it is serial eg in in-place routine use of vector registers or local scaling capability
Why work with joint probability distributions as having the square range, instead of them maintaining their co-components jointly with each other? Basically, computing the joint PDF from the population has the sampling effects in the result; those might be bounds instead of constants. When evaluating the j.p.d.f., why is it more productive to preserve the other components? In terms of where: the point of having joint p.d.f.'s is either that there is correlation (e.g., piece-wise, which would indicate or be indicated by the language transform generally), or, when they are off partitions, the sampling ranges.
Then, for these various distributions, the idea is to estimate what the distributions would be off of the samples. Then, use that to get a function, and take that function and invert it. Here the idea is to advise partitions off of the CDF, in terms of the uniform layout of the resources. For example, with the CDF of a uniform distribution, the partitioning would match the expected maximum. Then the next sample is readily bracketed, in distributions with means, for rejection.
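For instance, a small sketch in Java of advising partition boundaries from the empirical CDF of the samples, cutting at evenly spaced quantiles so each partition carries roughly equal mass; the names are illustrative, and at least two partitions are assumed:

```java
import java.util.Arrays;

// Partition boundaries from an empirical CDF: sort the samples, then cut them
// at evenly spaced quantiles.
public final class QuantilePartitions {
    static double[] boundaries(double[] samples, int partitions) {
        double[] s = samples.clone();
        Arrays.sort(s);                                   // sorted samples are the empirical CDF
        double[] cuts = new double[partitions - 1];
        for (int k = 1; k < partitions; k++) {
            int idx = (int) ((long) k * s.length / partitions);
            cuts[k - 1] = s[Math.min(idx, s.length - 1)]; // the k/partitions quantile
        }
        return cuts;
    }
}
```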
The pdf is the probability that the event occurs.
The inverse is the collection of outcomes with that one probability; that's not so much what is wanted, but rather the area or range around it.
For example, 1/6 is the probability of a 7 on 2d6: the combinations (1,6), (2,5), (3,4), (4,3), (5,2), (6,1) are 6 of the 36 outcomes, a 1/6 part; then the 1st order statistic of that.
For example, using the multinomial to estimate the probability of given items in a sample, or to get back order statistics down over the maxima and figure out how to sort the maxima.
About algorithm and cascade: there are sometimes blocking and non-blocking steps, and also ones that make sense to send speculatively. For example, a time-scheduled step might be completed ahead of time; send it with the outgoing batch for the next bucket at the midnight window, and in the next, if it's forward, it could go up. So the notion is to advise the algorithm: yes, advise this event in case it might be read ahead of time, else drop it; then whether it could be sent or read, it could.
If there is a joint PDF, then the marginals can be read off by summing over the various combinations of the others.
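For instance, a small sketch in Java of reading a marginal off a joint PMF stored as a table; the 2-D array layout is an assumption for illustration:

```java
// joint[i][j] = P(X = i, Y = j); summing out Y along each row gives P(X = i).
public final class Marginals {
    static double[] marginalX(double[][] joint) {
        double[] px = new double[joint.length];
        for (int i = 0; i < joint.length; i++) {
            for (int j = 0; j < joint[i].length; j++) {
                px[i] += joint[i][j];
            }
        }
        return px;
    }
}
```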
Then, there are the constituent pieces of the counting frameworks; the idea is to maintain those that are efficient in the arithmetic that connects the scalar and the block vector. For example, a field record has its counts. For those in a row, it satisfies the range summaries. So it's worthwhile to maintain the collected range counts directly in the range instead of the record, for example re-evaluating the record for it specifically but maintaining the count summaries across the records in the range. In that sense, as with caching the value on changes to the fields, it makes sense for the summaries to be abstracted into the ranges. Then, for example, if the range is evaluated and rejected on the summary, the item is not evaluated. Then the summary can have statistics that represent drop-outs (codes beyond the language space that indicate in the formula the requirement to re-evaluate the objects in the range to get a summary), where here a drop-out is basically a composition of the abstraction above, on the distributions, then as part of the language and scanner and parser machinery.
a) useful, in terms of how many bits they'd take to represent or that in forms their use is part of efficiency
b) space-filling, in terms of that for computing them to be ready, the expected gain exists to use them
c) generally dynamic, in terms of that the algorithms are requirements bound, and adaptive throughout
Hmm, thinking here on using the vector registers on the commodity hardware: should one use "just enough" or "all"? Here the idea is to answer the question of when to attenuate sampling, and it's in terms of the local map of node resources, or actually the map of node resources where the task is distributable, then including all costs, to make use of all the paid-for resources. Basically make use of the voluminous resources at least in cleaning up; then the re-entrant functions automatically define their attenuation bounds, with the operations profile conditioning the statistical profile, and, in re-use of functions, how they are cloned. Yet it is not just the space resources of the routine, but also the cache/locale.
Also for the register scheduling, the idea is that use of the smaller registers has expansions that fill the larger registers on the algorithm, with cascading algorithms into and out of the abstract register space. The abstract register space, or ARS, is a convenient representation, a root component in the processing along resource lines. On virtual machines, it might only go to the line or expression on breaks; still, it is about definition of behavior along general resource lines. This requires maintenance of units as natural weights in the framework. The register is basically a storage location for the result of an atomic operation. It is intrinsic, and the machine operation is implemented itself in the general-purpose sense of arithmetic logic units (ALUs) for integer operation, and here in the commodity case, vector register banks that are generally not being used by other applications in the resource time slices, or rather, in this case they are: so it is about whether to double the registers in packing (along merge and insert/delete) or in sampling (storage of sample data).
An abstraction of storage is then of the sample; for example, it could just be a clone or whatever exists as an in-state copy. The storage could be a pointer to the object; then the object stores itself. Then it might select along wakeup a code that has for its storage, for example, its own overall static serialization method for a read-only read-out. Or it could component the parcel and simply read it out in the composition framework, if, for example, it only stored what was ever read from it.
In this manner, along the lines of being able to rapidly and purposably recompose product-line recompositions, general repurposability is along: that inspection and sampling along dynamic lines establish composition and treatment (e.g., worksheets and fill-ins).
Tuesday, May 3, 2011
Polydimensional
http://www.springerlink.com/content/h544054565255255/
That looks interesting: "Clifford-Algebra Polydimensional Relativity and Relativistic Dynamics", Majev Pavschick(sh). As you can read from the abstract, the author indicates that including the polydimensional in the computation of relativistic effects readily reduces to plain mathematics.
Nice, still at it.
http://www-f1.ijs.si/~pavsic/
Sunday, October 31, 2010
Mathematical resources on Internet
http://www.math-atlas.org/ Mathematical Atlas: A gateway to Modern Mathematics
This is a really great website: Dave Rusin's Math Atlas is a great compendium of reading and knowledge. I recommend it to anybody who wants to learn or enjoy mathematics.
Thursday, October 7, 2010
Is "all categorical reasoning formally contradictory?" - via MathOverflow
I enjoy this article; it discusses some limitations of and directions for progress in modern mathematics, about how it's been discovered that modern mathematics isn't quite totally suitable for modern mathematics. (Of course that's rather strong.)
I enjoy it because it helps to bolster some few discussions I've promoted, that before were contentious, that I defended in their development on basic principles, and now don't have to anymore, standing for themselves among these others. It helps a lot that in some of these long-running discussions, it is somewhat more de rigueur these days to consider these features, there are new mathematics.
They're still looking for solutions to these things, or rather, still finding there are these features of the numbers that must be reconciled, out toward infinity and back. Luckily for me I already did, look.