minor spelling tweaks

Closes tensorflow/mlir#290

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/290 from kiszk:spelling_tweaks_201912 9d9afd16a723dd65754a04698b3976f150a6054a
PiperOrigin-RevId: 284169681
Kazuaki Ishizaki 2019-12-06 05:58:59 -08:00 committed by A. Unique TensorFlower
parent 58adf99ed1
commit 84a6182ddd
19 changed files with 97 additions and 95 deletions

@@ -375,7 +375,7 @@ private:
return mlir::success();
}
/// Emit a coinstant for a literal/constant array. It will be emitted as a
/// Emit a constant for a literal/constant array. It will be emitted as a
/// flattened array of data in an Attribute attached to a `toy.constant`
/// operation. See documentation on [Attributes](LangRef.md#attributes) for
/// more details. Here is an excerpt:

@@ -259,9 +259,9 @@ def : Pat<(AOp $input, $attr), (COp (AOp $input, $attr) $attr)>;
`AOp` is generated via a nested result pattern; DRR won't be able to deduce the
result type for it. A custom builder for `AOp` should be defined and it should
deduce the result type by itself. The builder should have the a separate
parameter for each operand and attribute and deduce the result type internally
by itself. For example, for the above `AOp`, a possible builder is:
deduce the result type by itself. The builder should have the separate parameter
for each operand and attribute and deduce the result type internally by itself.
For example, for the above `AOp`, a possible builder is:
```c++
@@ -311,9 +311,10 @@ def DOp : Op<"d_op"> {
def : Pat<(AOp $input, $ignored_attr), (DOp (BOp:$b_result) $b_result)>;
```
In this pattern, a `AOp` is matched and replaced with a `DOp` whose two operands
are from the result of a single `BOp`. This is only possible by binding the
result of the `BOp` to a name and reuse it for the second operand of the `DOp`
In this pattern, an `AOp` is matched and replaced with a `DOp` whose two
operands are from the result of a single `BOp`. This is only possible by binding
the result of the `BOp` to a name and reuse it for the second operand of the
`DOp`
#### `NativeCodeCall`: transforming the generated op

@@ -87,7 +87,7 @@ memory buffers at the module level, we chose to do it at the function level to
provide some structuring for the lifetime of those buffers; this avoids the
incentive to use the buffers for communicating between different kernels or
launches of the same kernel, which should be done through function arguments
intead; we chose not to use `alloca`-style approach that would require more
instead; we chose not to use `alloca`-style approach that would require more
complex lifetime analysis following the principles of MLIR that promote
structure and representing analysis results in the IR.

@@ -60,16 +60,17 @@ allowed in a TableGen file (typically with filename suffix `.td`) can be found
[here][TableGenIntro]. The formal language specification can be found
[here][TableGenRef]. _Roughly_ speaking,
* TableGen `class` is similar to C++ class; it can be templated and subclassed.
* TableGen `def` is similar to C++ object; it can be declared by specializing
a TableGen `class` (e.g., `def MyDef : MyClass<...>;`) or completely
independently (e.g., `def MyDef;`). It cannot be further templated or
subclassed.
* TableGen `dag` is a dedicated type for directed graph of elements. A `dag`
has one operator and zero or more arguments. Its syntax is `(operator arg0,
arg1, argN)`. The operator can be any TableGen `def`; an argument can be
anything, including `dag` itself. We can have names attached to both the
operator and the arguments like `(MyOp:$op_name MyArg:$arg_name)`.
* TableGen `class` is similar to C++ class; it can be templated and
subclassed.
* TableGen `def` is similar to C++ object; it can be declared by specializing
a TableGen `class` (e.g., `def MyDef : MyClass<...>;`) or completely
independently (e.g., `def MyDef;`). It cannot be further templated or
subclassed.
* TableGen `dag` is a dedicated type for directed acyclic graph of elements. A
`dag` has one operator and zero or more arguments. Its syntax is `(operator
arg0, arg1, argN)`. The operator can be any TableGen `def`; an argument can
be anything, including `dag` itself. We can have names attached to both the
operator and the arguments like `(MyOp:$op_name MyArg:$arg_name)`.
Please see the [language introduction][TableGenIntro] to learn about all the
types and expressions supported by TableGen.
@@ -214,13 +215,13 @@ places like constraints.
To declare a variadic operand, wrap the `TypeConstraint` for the operand with
`Variadic<...>`.
Normally operations have no variadic operands or just one variadic operand.
For the latter case, it is easily deduce which dynamic operands are for the
static variadic operand definition. But if an operation has more than one
variadic operands, it would be impossible to attribute dynamic operands to the
Normally operations have no variadic operands or just one variadic operand. For
the latter case, it is easy to deduce which dynamic operands are for the static
variadic operand definition. But if an operation has more than one variadic
operands, it would be impossible to attribute dynamic operands to the
corresponding static variadic operand definitions without further information
from the operation. Therefore, the `SameVariadicOperandSize` trait is needed
to indicate that all variadic operands have the same number of dynamic values.
from the operation. Therefore, the `SameVariadicOperandSize` trait is needed to
indicate that all variadic operands have the same number of dynamic values.
#### Optional attributes
@@ -776,7 +777,7 @@ duplication, which is being worked on right now.
### Enum attributes
Some attributes can only take values from an predefined enum, e.g., the
comparsion kind of a comparsion op. To define such attributes, ODS provides
comparison kind of a comparison op. To define such attributes, ODS provides
several mechanisms: `StrEnumAttr`, `IntEnumAttr`, and `BitEnumAttr`.
* `StrEnumAttr`: each enum case is a string, the attribute is stored as a
@@ -1042,53 +1043,54 @@ possible).
We considered the approaches of several contemporary systems and focused on
requirements that were desirable:
* Ops registered using a registry separate from C++ code.
* Unknown ops are allowed in MLIR, so ops need not be registered. The
ability of the compiler to optimize those ops or graphs containing those
ops is constrained but correct.
* The current proposal does not include a runtime op description, but it
does not preclude such description, it can be added later.
* The op registry is essential for generating C++ classes that make
manipulating ops, verifying correct construction etc. in C++ easier by
providing a typed representation and accessors.
* The op registry will be defined in
[TableGen](https://llvm.org/docs/TableGen/index.html) and be used to
generate C++ classes and utility functions
(builder/verifier/parser/printer).
* TableGen is a modelling specification language used by LLVM's backends
and fits in well with trait based modelling. This is an implementation
decision and there are alternative ways of doing this. But the
specification language is good for the requirements of modelling the
traits (as seen from usage in LLVM processor backend modelling) and easy
to extend, so a practical choice. If another good option comes up, we
will consider it.
* MLIR allows both defined and undefined ops.
* Defined ops should have fixed semantics and could have a corresponding
reference implementation defined using, for example, EDSC.
* Dialects are under full control of the dialect owner and normally live
with the framework of the dialect.
* The op's traits (e.g., commutative) are modelled along with the op in
the registry.
* The op's operand/return type constraints are modelled along with the op in
the registry (see [Shape inference](#shape-inference) discussion below),
this allows (e.g.) optimized concise syntax in textual dumps.
* Behavior of the op is documented along with the op with a summary and a
description. The description is written in markdown and extracted for
inclusion in the generated LangRef section of the dialect.
* The generic assembly form of printing and parsing is available as normal,
but a custom parser and printer can either be specified or automatically
generated from an optional string representation showing the mapping of the
"assembly" string to operands/type.
* Parser-level remappings (e.g., `eq` to enum) will be supported as part
of the parser generation.
* Matching patterns are specified separately from the op description.
* Contrasted with LLVM there is no "base" set of ops that every backend
needs to be aware of. Instead there are many different dialects and the
transformations/legalizations between these dialects form a graph of
transformations.
* Reference implementation may be provided along with the op definition.
* The reference implementation may be in terms of either standard ops or
other reference implementations.
* Ops registered using a registry separate from C++ code.
* Unknown ops are allowed in MLIR, so ops need not be registered. The
ability of the compiler to optimize those ops or graphs containing those
ops is constrained but correct.
* The current proposal does not include a runtime op description, but it
does not preclude such description, it can be added later.
* The op registry is essential for generating C++ classes that make
manipulating ops, verifying correct construction etc. in C++ easier by
providing a typed representation and accessors.
* The op registry will be defined in
[TableGen](https://llvm.org/docs/TableGen/index.html) and be used to
generate C++ classes and utility functions
(builder/verifier/parser/printer).
* TableGen is a modelling specification language used by LLVM's backends
and fits in well with trait-based modelling. This is an implementation
decision and there are alternative ways of doing this. But the
specification language is good for the requirements of modelling the
traits (as seen from usage in LLVM processor backend modelling) and easy
to extend, so a practical choice. If another good option comes up, we
will consider it.
* MLIR allows both defined and undefined ops.
* Defined ops should have fixed semantics and could have a corresponding
reference implementation defined using, for example, EDSC.
* Dialects are under full control of the dialect owner and normally live
with the framework of the dialect.
* The op's traits (e.g., commutative) are modelled along with the op in the
registry.
* The op's operand/return type constraints are modelled along with the op in
the registry (see [Shape inference](#shape-inference) discussion below),
this allows (e.g.) optimized concise syntax in textual dumps.
* Behavior of the op is documented along with the op with a summary and a
description. The description is written in markdown and extracted for
inclusion in the generated LangRef section of the dialect.
* The generic assembly form of printing and parsing is available as normal,
but a custom parser and printer can either be specified or automatically
generated from an optional string representation showing the mapping of the
"assembly" string to operands/type.
* Parser-level remappings (e.g., `eq` to enum) will be supported as part
of the parser generation.
* Matching patterns are specified separately from the op description.
* Contrasted with LLVM there is no "base" set of ops that every backend
needs to be aware of. Instead there are many different dialects and the
transformations/legalizations between these dialects form a graph of
transformations.
* Reference implementation may be provided along with the op definition.
* The reference implementation may be in terms of either standard ops or
other reference implementations.
TODO: document expectation if the dependent op's definition changes.

@@ -122,7 +122,7 @@ An analysis may provide additional hooks to control various behavior:
Given a preserved analysis set, the analysis returns true if it should truly be
invalidated. This allows for more fine-tuned invalidation in cases where an
analysis wasn't explicitly marked preserved, but may be preserved(or
analysis wasn't explicitly marked preserved, but may be preserved (or
invalidated) based upon other properties such as analyses sets.
### Querying Analyses

@@ -510,7 +510,7 @@ struct FuncOpConversion : public LLVMLegalizationPattern<FuncOp> {
attributes.push_back(attr);
}
// Create an LLVM funcion, use external linkage by default until MLIR
// Create an LLVM function, use external linkage by default until MLIR
// functions have linkage.
auto newFuncOp = rewriter.create<LLVM::LLVMFuncOp>(
op->getLoc(), funcOp.getName(), llvmType, LLVM::Linkage::External,

@@ -71,7 +71,7 @@ mlir::spirv::getEntryPointABIAttr(ArrayRef<int32_t> localSize,
Type SPIRVTypeConverter::getIndexType(MLIRContext *context) {
// Convert to 32-bit integers for now. Might need a way to control this in
// future.
// TODO(ravishankarm): It is porbably better to make it 64-bit integers. To
// TODO(ravishankarm): It is probably better to make it 64-bit integers. To
// this some support is needed in SPIR-V dialect for Conversion
// instructions. The Vulkan spec requires the builtins like
// GlobalInvocationID, etc. to be 32-bit (unsigned) integers which should be
@@ -189,7 +189,7 @@ static spirv::GlobalVariableOp getBuiltinVariable(spirv::ModuleOp &moduleOp,
return nullptr;
}
/// Gets name of global variable for a buitlin.
/// Gets name of global variable for a builtin.
static std::string getBuiltinVarName(spirv::BuiltIn builtin) {
return std::string("__builtin_var_") + stringifyBuiltIn(builtin).str() + "__";
}
@@ -230,7 +230,7 @@ getOrInsertBuiltinVariable(spirv::ModuleOp &moduleOp, Location loc,
}
/// Gets the global variable associated with a builtin and add
/// it if it doesnt exist.
/// it if it doesn't exist.
Value *mlir::spirv::getBuiltinVariableValue(Operation *op,
spirv::BuiltIn builtin,
OpBuilder &builder) {

@@ -270,7 +270,6 @@ private:
// block and redirect all branches to the old header block to the old
// merge block (which contains the spv.selection/spv.loop op now).
/// For OpPhi instructions, we use block arguments to represent them. OpPhi
/// encodes a list of (value, predecessor) pairs. At the time of handling the
/// block containing an OpPhi instruction, the predecessor block might not be
@@ -278,7 +277,7 @@ private:
/// the block argument from the predecessors. We use the following approach:
///
/// 1. For each OpPhi instruction, add a block argument to the current block
/// in construction. Record the block argment in `valueMap` so its uses
/// in construction. Record the block argument in `valueMap` so its uses
/// can be resolved. For the list of (value, predecessor) pairs, update
/// `blockPhiInfo` for bookkeeping.
/// 2. After processing all blocks, loop over `blockPhiInfo` to fix up each

@@ -1116,7 +1116,7 @@ void ModulePrinter::printType(Type type) {
//===----------------------------------------------------------------------===//
namespace {
/// This class provides the main specialication of the DialectAsmPrinter that is
/// This class provides the main specialization of the DialectAsmPrinter that is
/// used to provide support for print attributes and types. This hooks allows
/// for dialects to hook into the main ModulePrinter.
struct CustomDialectAsmPrinter : public DialectAsmPrinter {

@@ -689,7 +689,7 @@ SourceMgrDiagnosticVerifierHandler::SourceMgrDiagnosticVerifierHandler(
for (unsigned i = 0, e = mgr.getNumBuffers(); i != e; ++i)
(void)impl->computeExpectedDiags(mgr.getMemoryBuffer(i + 1));
// Register a handler to verfy the diagnostics.
// Register a handler to verify the diagnostics.
setHandler([&](Diagnostic &diag) {
// Process the main diagnostics.
process(diag);

@@ -286,7 +286,7 @@ void Operation::destroy() {
/// Return the context this operation is associated with.
MLIRContext *Operation::getContext() { return location->getContext(); }
/// Return the dialact this operation is associated with, or nullptr if the
/// Return the dialect this operation is associated with, or nullptr if the
/// associated dialect is not registered.
Dialect *Operation::getDialect() {
if (auto *abstractOp = getAbstractOperation())

@@ -283,7 +283,7 @@ static Optional<WalkResult> walkSymbolUses(
if (walkSymbolRefs(&op, callback).wasInterrupted())
return WalkResult::interrupt();
// If this operation has regions, and it as well as its dialect arent't
// If this operation has regions, and it as well as its dialect aren't
// registered then conservatively fail. The operation may define a
// symbol table, so we can't opaquely know if we should traverse to find
// nested uses.

@@ -323,7 +323,7 @@ void PassTiming::runAfterPass(Pass *pass, Operation *) {
return;
}
// Adapator passes aren't timed directly, so we don't need to stop their
// Adaptor passes aren't timed directly, so we don't need to stop their
// timers.
if (!isAdaptorPass(pass))
timer->stop();

@@ -1561,10 +1561,10 @@ public:
!canFuseSrcWhichWritesToLiveOut(srcId, dstId, srcStoreOp, mdg))
continue;
// Dont create a private memref if 'writesToLiveInOrOut'.
// Don't create a private memref if 'writesToLiveInOrOut'.
bool createPrivateMemref = !writesToLiveInOrOut;
// Dont create a private memref if 'srcNode' has in edges on 'memref',
// or if 'dstNode' has out edges on 'memref'.
// Don't create a private memref if 'srcNode' has in edges on
// 'memref', or if 'dstNode' has out edges on 'memref'.
if (mdg->getIncomingMemRefAccesses(srcNode->id, memref) > 0 ||
mdg->getOutEdgeCount(dstNode->id, memref) > 0) {
createPrivateMemref = false;

@@ -265,7 +265,7 @@ func @failedOperandSizeAttrWrongTotalSize(%arg: i32) {
// -----
func @failedOperandSizeAttrWrongCount(%arg: i32) {
// expected-error @+1 {{'operand_segment_sizes' attribute for specifiying operand segments must have 4 elements}}
// expected-error @+1 {{'operand_segment_sizes' attribute for specifying operand segments must have 4 elements}}
"test.attr_sized_operands"(%arg, %arg, %arg, %arg) {operand_segment_sizes = dense<[2, 1, 1]>: vector<3xi32>} : (i32, i32, i32, i32) -> ()
}
@@ -315,7 +315,7 @@ func @failedResultSizeAttrWrongTotalSize() {
// -----
func @failedResultSizeAttrWrongCount() {
// expected-error @+1 {{'result_segment_sizes' attribute for specifiying result segments must have 4 elements}}
// expected-error @+1 {{'result_segment_sizes' attribute for specifying result segments must have 4 elements}}
%0:4 = "test.attr_sized_results"() {result_segment_sizes = dense<[2, 1, 1]>: vector<3xi32>} : () -> (i32, i32, i32, i32)
}

@@ -1439,7 +1439,7 @@ void OpEmitter::genVerifier() {
auto sizeAttr = getAttrOfType<DenseIntElementsAttr>("{0}");
auto numElements = sizeAttr.getType().cast<ShapedType>().getNumElements();
if (numElements != {1}) {{
return emitOpError("'{0}' attribute for specifiying {2} segments "
return emitOpError("'{0}' attribute for specifying {2} segments "
"must have {1} elements");
}
)";

@@ -685,7 +685,7 @@ std::string PatternEmitter::handleReplaceWithNativeCodeCall(DagNode tree) {
}
for (int i = 0, e = tree.getNumArgs(); i != e; ++i) {
attrs[i] = handleOpArgument(tree.getArgAsLeaf(i), tree.getArgName(i));
LLVM_DEBUG(llvm::dbgs() << "NativeCodeCall argment #" << i
LLVM_DEBUG(llvm::dbgs() << "NativeCodeCall argument #" << i
<< " replacement: " << attrs[i] << "\n");
}
return tgfmt(fmt, &fmtCtx, attrs[0], attrs[1], attrs[2], attrs[3], attrs[4],
@@ -769,7 +769,7 @@ std::string PatternEmitter::handleOpCreation(DagNode tree, int resultIndex,
if (isSameOperandsAndResultType || useFirstAttr) {
// We know how to deduce the result type for ops with these traits and we've
// generated builders taking aggregrate parameters. Use those builders to
// generated builders taking aggregate parameters. Use those builders to
// create the ops.
// First prepare local variables for op arguments used in builder call.
@@ -891,7 +891,7 @@ void PatternEmitter::supplyValuesForOpArgs(
Operator &resultOp = node.getDialectOp(opMap);
for (int argIndex = 0, numOpArgs = resultOp.getNumArgs();
argIndex != numOpArgs; ++argIndex) {
// Start each argment on its own line.
// Start each argument on its own line.
(os << ",\n").indent(8);
Argument opArg = resultOp.getArg(argIndex);

@@ -687,7 +687,7 @@ static void emitEnumGetSymbolizeFnDefn(const EnumAttr &enumAttr,
}
static bool emitOpUtils(const RecordKeeper &recordKeeper, raw_ostream &os) {
llvm::emitSourceFileHeader("SPIR-V Op Utilites", os);
llvm::emitSourceFileHeader("SPIR-V Op Utilities", os);
auto defs = recordKeeper.getAllDerivedDefinitions("EnumAttrInfo");
os << "#ifndef SPIRV_OP_UTILS_H_\n";

@@ -109,7 +109,7 @@ TEST(StructsGenTest, ClassofMissingFalse) {
llvm::SmallVector<mlir::NamedAttribute, 3> newValues(
expectedValues.begin() + 1, expectedValues.end());
// Make a new DictionaryAttr and validate it is not a validte TestStruct.
// Make a new DictionaryAttr and validate it is not a validate TestStruct.
auto badDictionary = mlir::DictionaryAttr::get(newValues, &context);
ASSERT_FALSE(test::TestStruct::classof(badDictionary));
}