diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index a4656dd415bbd..71b20cb0946c4 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -14,7 +14,7 @@ links to the major sections: If you have questions, please make a post on [internals.rust-lang.org][internals] or hop on [#rust-internals][pound-rust-internals]. -As a reminder, all contributors are expected to follow our [Code of Conduct](coc). +As a reminder, all contributors are expected to follow our [Code of Conduct][coc]. [pound-rust-internals]: http://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust-internals [internals]: http://internals.rust-lang.org diff --git a/RELEASES.md b/RELEASES.md index 09929eee9230d..69b804cf24c40 100644 --- a/RELEASES.md +++ b/RELEASES.md @@ -51,7 +51,7 @@ Version 1.0.0-alpha.2 (February 2015) * Abstract [OS-specific string types][osstr], `std::ff::{OsString, OsStr}`, provide strings in platform-specific encodings for easier interop with system APIs. [RFC][osstr-rfc]. - * The `boxed::into_raw` and `Box::frow_raw` functions [convert + * The `boxed::into_raw` and `Box::from_raw` functions [convert between `Box` and `*mut T`][boxraw], a common pattern for creating raw pointers. diff --git a/src/doc/trpl/guessing-game.md b/src/doc/trpl/guessing-game.md index 01f270f19512a..a40374fe30fad 100644 --- a/src/doc/trpl/guessing-game.md +++ b/src/doc/trpl/guessing-game.md @@ -91,7 +91,7 @@ fn main() { ``` You've seen this code before, when we talked about standard input. We -import the `std::io` module with `use`, and then our `main` function contains +import the `std::old_io` module with `use`, and then our `main` function contains our program's logic. We print a little message announcing the game, ask the user to input a guess, get their input, and then print it out. diff --git a/src/doc/trpl/if.md b/src/doc/trpl/if.md index a350df67b1759..7dac49987d849 100644 --- a/src/doc/trpl/if.md +++ b/src/doc/trpl/if.md @@ -34,6 +34,20 @@ if x == 5 { } ``` +If there is more than one case, use an `else if`: + +```rust +let x = 5; + +if x == 5 { + println!("x is five!"); +} else if x == 6 { + println!("x is six!"); +} else { + println!("x is not five or six :("); +} +``` + This is all pretty standard. However, you can also do this: diff --git a/src/doc/trpl/pointers.md b/src/doc/trpl/pointers.md index 9c649cd2273f8..332f299a67f3c 100644 --- a/src/doc/trpl/pointers.md +++ b/src/doc/trpl/pointers.md @@ -687,7 +687,9 @@ than the hundred `int`s that make up the `BigStruct`. This is an antipattern in Rust. Instead, write this: -```{rust} +```rust +#![feature(box_syntax)] + struct BigStruct { one: i32, two: i32, @@ -706,10 +708,13 @@ fn main() { one_hundred: 100, }); - let y = Box::new(foo(x)); + let y = box foo(x); } ``` +Note that this uses the `box_syntax` feature gate, so this syntax may change in +the future. + This gives you flexibility without sacrificing performance. You may think that this gives us terrible performance: return a value and then diff --git a/src/doc/trpl/static-and-dynamic-dispatch.md b/src/doc/trpl/static-and-dynamic-dispatch.md index 9421dac7bf65d..98dac9bf84bb4 100644 --- a/src/doc/trpl/static-and-dynamic-dispatch.md +++ b/src/doc/trpl/static-and-dynamic-dispatch.md @@ -84,7 +84,7 @@ inlining and hence usually higher performance. It also has some downsides: causing code bloat due to many copies of the same function existing in the binary, one for each type. -Furthermore, compilers aren’t perfect and may “optimise” code to become slower. 
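As a quick illustration of the `into_raw`/`from_raw` round-trip described in the RELEASES.md hunk earlier in this patch, here is a minimal sketch using the current associated-function spellings (`Box::into_raw` rather than the free `boxed::into_raw` named in that release note):

```rust
fn main() {
    let b = Box::new(42i32);
    // Give up the Box and take ownership of the raw allocation.
    let p: *mut i32 = Box::into_raw(b);
    unsafe {
        *p += 1;
        // Rebuild the Box so the allocation is freed normally when it drops.
        let b = Box::from_raw(p);
        assert_eq!(*b, 43);
    }
}
```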
+Furthermore, compilers aren’t perfect and may “optimize” code to become slower. For example, functions inlined too eagerly will bloat the instruction cache (cache rules everything around us). This is part of the reason that `#[inline]` and `#[inline(always)]` should be used carefully, and one reason why using a @@ -104,7 +104,7 @@ objects, like `&Foo` or `Box`, are normal values that store a value of known at runtime. The methods of the trait can be called on a trait object via a special record of function pointers (created and managed by the compiler). -A function that takes a trait object is not specialised to each of the types +A function that takes a trait object is not specialized to each of the types that implements `Foo`: only one copy is generated, often (but not always) resulting in less code bloat. However, this comes at the cost of requiring slower virtual function calls, and effectively inhibiting any chance of @@ -112,7 +112,7 @@ inlining and related optimisations from occurring. Trait objects are both simple and complicated: their core representation and layout is quite straight-forward, but there are some curly error messages and -surprising behaviours to discover. +surprising behaviors to discover. ### Obtaining a trait object @@ -140,13 +140,13 @@ and casts are identical. This operation can be seen as "erasing" the compiler's knowledge about the specific type of the pointer, and hence trait objects are sometimes referred to -"type erasure". +as "type erasure". ### Representation Let's start simple, with the runtime representation of a trait object. The `std::raw` module contains structs with layouts that are the same as the -complicated build-in types, [including trait objects][stdraw]: +complicated built-in types, [including trait objects][stdraw]: ```rust # mod foo { @@ -223,14 +223,14 @@ static Foo_for_String_vtable: FooVtable = FooVtable { The `destructor` field in each vtable points to a function that will clean up any resources of the vtable's type, for `u8` it is trivial, but for `String` it will free the memory. This is necessary for owning trait objects like -`Box`, which need to clean-up both the `Box` allocation and as well as the +`Box`, which need to clean-up both the `Box` allocation as well as the internal type when they go out of scope. The `size` and `align` fields store the size of the erased type, and its alignment requirements; these are essentially unused at the moment since the information is embedded in the -destructor, but will be used in future, as trait objects are progressively made -more flexible. +destructor, but will be used in the future, as trait objects are progressively +made more flexible. 
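The trait-object discussion above contrasts monomorphized (static) calls with vtable-based (dynamic) calls. A minimal sketch of that difference, reusing the chapter's `Foo` trait; the helper names `do_static`/`do_dynamic` are made up here, and `&dyn Foo` is the current spelling of what the book text of this era writes as `&Foo`:

```rust
trait Foo {
    fn method(&self) -> String;
}

impl Foo for u8 {
    fn method(&self) -> String { format!("u8: {}", *self) }
}

impl Foo for String {
    fn method(&self) -> String { format!("string: {}", self) }
}

// Static dispatch: one specialized copy is generated per concrete type T.
fn do_static<T: Foo>(x: &T) -> String {
    x.method()
}

// Dynamic dispatch: a single copy; the call goes through the vtable
// carried by the trait object.
fn do_dynamic(x: &dyn Foo) -> String {
    x.method()
}

fn main() {
    let n = 5u8;
    let s = "hello".to_string();
    assert_eq!(do_static(&n), "u8: 5");
    assert_eq!(do_dynamic(&s), "string: hello");
}
```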
-Suppose we've got some values that implement `Foo`, the explicit form of +Suppose we've got some values that implement `Foo`, then the explicit form of construction and use of `Foo` trait objects might look a bit like (ignoring the type mismatches: they're all just pointers anyway): diff --git a/src/liballoc/arc.rs b/src/liballoc/arc.rs index 934e6ab215916..cc49164ef91b3 100644 --- a/src/liballoc/arc.rs +++ b/src/liballoc/arc.rs @@ -598,13 +598,6 @@ impl Default for Arc { fn default() -> Arc { Arc::new(Default::default()) } } -#[cfg(stage0)] -impl> Hash for Arc { - fn hash(&self, state: &mut H) { - (**self).hash(state) - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl Hash for Arc { fn hash(&self, state: &mut H) { diff --git a/src/liballoc/boxed.rs b/src/liballoc/boxed.rs index a3516bd667b7a..ce889c796012c 100644 --- a/src/liballoc/boxed.rs +++ b/src/liballoc/boxed.rs @@ -220,14 +220,6 @@ impl Ord for Box { #[stable(feature = "rust1", since = "1.0.0")] impl Eq for Box {} -#[cfg(stage0)] -impl> Hash for Box { - #[inline] - fn hash(&self, state: &mut S) { - (**self).hash(state); - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl Hash for Box { fn hash(&self, state: &mut H) { diff --git a/src/liballoc/lib.rs b/src/liballoc/lib.rs index bc349ebebdeed..0cdc71b6f604f 100644 --- a/src/liballoc/lib.rs +++ b/src/liballoc/lib.rs @@ -73,6 +73,7 @@ #![feature(unboxed_closures)] #![feature(unsafe_no_drop_flag)] #![feature(core)] +#![cfg_attr(test, feature(test, alloc, rustc_private))] #![cfg_attr(all(not(feature = "external_funcs"), not(feature = "external_crate")), feature(libc))] diff --git a/src/liballoc/rc.rs b/src/liballoc/rc.rs index 9d39511543188..ed7d34de7a688 100644 --- a/src/liballoc/rc.rs +++ b/src/liballoc/rc.rs @@ -592,14 +592,6 @@ impl Ord for Rc { } // FIXME (#18248) Make `T` `Sized?` -#[cfg(stage0)] -impl> Hash for Rc { - #[inline] - fn hash(&self, state: &mut S) { - (**self).hash(state); - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl Hash for Rc { fn hash(&self, state: &mut H) { diff --git a/src/libcollections/bit.rs b/src/libcollections/bit.rs index 11c576eab1525..21218201182f9 100644 --- a/src/libcollections/bit.rs +++ b/src/libcollections/bit.rs @@ -985,17 +985,6 @@ impl fmt::Debug for BitVec { } #[stable(feature = "rust1", since = "1.0.0")] -#[cfg(stage0)] -impl hash::Hash for BitVec { - fn hash(&self, state: &mut S) { - self.nbits.hash(state); - for elem in self.blocks() { - elem.hash(state); - } - } -} -#[stable(feature = "rust1", since = "1.0.0")] -#[cfg(not(stage0))] impl hash::Hash for BitVec { fn hash(&self, state: &mut H) { self.nbits.hash(state); @@ -1776,16 +1765,7 @@ impl fmt::Debug for BitSet { } } -#[cfg(stage0)] -impl hash::Hash for BitSet { - fn hash(&self, state: &mut S) { - for pos in self { - pos.hash(state); - } - } -} #[stable(feature = "rust1", since = "1.0.0")] -#[cfg(not(stage0))] impl hash::Hash for BitSet { fn hash(&self, state: &mut H) { for pos in self { diff --git a/src/libcollections/borrow.rs b/src/libcollections/borrow.rs index 901d7a73b51ed..e92f38741c9a0 100644 --- a/src/libcollections/borrow.rs +++ b/src/libcollections/borrow.rs @@ -282,16 +282,6 @@ impl<'a, B: ?Sized> fmt::Display for Cow<'a, B> where } #[stable(feature = "rust1", since = "1.0.0")] -#[cfg(stage0)] -impl<'a, B: ?Sized, S: Hasher> Hash for Cow<'a, B> where B: Hash + ToOwned -{ - #[inline] - fn hash(&self, state: &mut S) { - Hash::hash(&**self, state) - } -} -#[stable(feature = "rust1", since = 
"1.0.0")] -#[cfg(not(stage0))] impl<'a, B: ?Sized> Hash for Cow<'a, B> where B: Hash + ToOwned { #[inline] diff --git a/src/libcollections/borrow_stage0.rs b/src/libcollections/borrow_stage0.rs deleted file mode 100644 index c1d74b16ce6bc..0000000000000 --- a/src/libcollections/borrow_stage0.rs +++ /dev/null @@ -1,313 +0,0 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -//! A module for working with borrowed data. - -#![stable(feature = "rust1", since = "1.0.0")] - -use core::clone::Clone; -use core::cmp::{Eq, Ord, Ordering, PartialEq, PartialOrd}; -use core::hash::{Hash, Hasher}; -use core::marker::Sized; -use core::ops::Deref; -use core::option::Option; - -use fmt; -use alloc::{rc, arc}; - -use self::Cow::*; - -/// A trait for borrowing data. -/// -/// In general, there may be several ways to "borrow" a piece of data. The -/// typical ways of borrowing a type `T` are `&T` (a shared borrow) and `&mut T` -/// (a mutable borrow). But types like `Vec` provide additional kinds of -/// borrows: the borrowed slices `&[T]` and `&mut [T]`. -/// -/// When writing generic code, it is often desirable to abstract over all ways -/// of borrowing data from a given type. That is the role of the `Borrow` -/// trait: if `T: Borrow`, then `&U` can be borrowed from `&T`. A given -/// type can be borrowed as multiple different types. In particular, `Vec: -/// Borrow>` and `Vec: Borrow<[T]>`. -#[stable(feature = "rust1", since = "1.0.0")] -pub trait Borrow { - /// Immutably borrow from an owned value. - #[stable(feature = "rust1", since = "1.0.0")] - fn borrow(&self) -> &Borrowed; -} - -/// A trait for mutably borrowing data. -/// -/// Similar to `Borrow`, but for mutable borrows. -#[stable(feature = "rust1", since = "1.0.0")] -pub trait BorrowMut : Borrow { - /// Mutably borrow from an owned value. - #[stable(feature = "rust1", since = "1.0.0")] - fn borrow_mut(&mut self) -> &mut Borrowed; -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Borrow for T { - fn borrow(&self) -> &T { self } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl BorrowMut for T { - fn borrow_mut(&mut self) -> &mut T { self } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T: ?Sized> Borrow for &'a T { - fn borrow(&self) -> &T { &**self } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T: ?Sized> Borrow for &'a mut T { - fn borrow(&self) -> &T { &**self } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T: ?Sized> BorrowMut for &'a mut T { - fn borrow_mut(&mut self) -> &mut T { &mut **self } -} - -impl Borrow for rc::Rc { - fn borrow(&self) -> &T { &**self } -} - -impl Borrow for arc::Arc { - fn borrow(&self) -> &T { &**self } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> Borrow for Cow<'a, B> where B: ToOwned, ::Owned: 'a { - fn borrow(&self) -> &B { - &**self - } -} - -/// A generalization of Clone to borrowed data. -/// -/// Some types make it possible to go from borrowed to owned, usually by -/// implementing the `Clone` trait. But `Clone` works only for going from `&T` -/// to `T`. The `ToOwned` trait generalizes `Clone` to construct owned data -/// from any borrow of a given type. 
-#[stable(feature = "rust1", since = "1.0.0")] -pub trait ToOwned { - #[stable(feature = "rust1", since = "1.0.0")] - type Owned: Borrow; - - /// Create owned data from borrowed data, usually by copying. - #[stable(feature = "rust1", since = "1.0.0")] - fn to_owned(&self) -> Self::Owned; -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl ToOwned for T where T: Clone { - type Owned = T; - fn to_owned(&self) -> T { self.clone() } -} - -/// A clone-on-write smart pointer. -/// -/// The type `Cow` is a smart pointer providing clone-on-write functionality: it -/// can enclose and provide immutable access to borrowed data, and clone the -/// data lazily when mutation or ownership is required. The type is designed to -/// work with general borrowed data via the `Borrow` trait. -/// -/// `Cow` implements both `Deref`, which means that you can call -/// non-mutating methods directly on the data it encloses. If mutation -/// is desired, `to_mut` will obtain a mutable references to an owned -/// value, cloning if necessary. -/// -/// # Example -/// -/// ```rust -/// use std::borrow::Cow; -/// -/// fn abs_all(input: &mut Cow<[int]>) { -/// for i in 0..input.len() { -/// let v = input[i]; -/// if v < 0 { -/// // clones into a vector the first time (if not already owned) -/// input.to_mut()[i] = -v; -/// } -/// } -/// } -/// ``` -#[stable(feature = "rust1", since = "1.0.0")] -pub enum Cow<'a, B: ?Sized + 'a> where B: ToOwned { - /// Borrowed data. - #[stable(feature = "rust1", since = "1.0.0")] - Borrowed(&'a B), - - /// Owned data. - #[stable(feature = "rust1", since = "1.0.0")] - Owned(::Owned) -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> Clone for Cow<'a, B> where B: ToOwned { - fn clone(&self) -> Cow<'a, B> { - match *self { - Borrowed(b) => Borrowed(b), - Owned(ref o) => { - let b: &B = o.borrow(); - Owned(b.to_owned()) - }, - } - } -} - -impl<'a, B: ?Sized> Cow<'a, B> where B: ToOwned, ::Owned: 'a { - /// Acquire a mutable reference to the owned form of the data. - /// - /// Copies the data if it is not already owned. - #[stable(feature = "rust1", since = "1.0.0")] - pub fn to_mut(&mut self) -> &mut ::Owned where ::Owned: 'a { - match *self { - Borrowed(borrowed) => { - *self = Owned(borrowed.to_owned()); - self.to_mut() - } - Owned(ref mut owned) => owned - } - } - - /// Extract the owned data. - /// - /// Copies the data if it is not already owned. 
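The `Cow` doc example above (`abs_all`) is written with the old `int` type and would not compile today; an updated sketch of the same clone-on-write pattern with `i32`, including the `to_mut`/`into_owned` calls these docs describe:

```rust
use std::borrow::Cow;

// Clones the borrowed slice into a Vec only if a negative value
// actually has to be rewritten.
fn abs_all(input: &mut Cow<[i32]>) {
    for i in 0..input.len() {
        let v = input[i];
        if v < 0 {
            input.to_mut()[i] = -v;
        }
    }
}

fn main() {
    let mut data = Cow::from(&[1, -2, 3][..]);
    abs_all(&mut data);
    assert_eq!(&data[..], &[1, 2, 3][..]);

    // into_owned extracts a Vec<i32>, cloning only if still borrowed.
    let owned: Vec<i32> = data.into_owned();
    assert_eq!(owned, vec![1, 2, 3]);
}
```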
- #[stable(feature = "rust1", since = "1.0.0")] - pub fn into_owned(self) -> ::Owned { - match self { - Borrowed(borrowed) => borrowed.to_owned(), - Owned(owned) => owned - } - } - - /// Returns true if this `Cow` wraps a borrowed value - #[deprecated(since = "1.0.0", reason = "match on the enum instead")] - #[unstable(feature = "std_misc")] - pub fn is_borrowed(&self) -> bool { - match *self { - Borrowed(_) => true, - _ => false, - } - } - - /// Returns true if this `Cow` wraps an owned value - #[deprecated(since = "1.0.0", reason = "match on the enum instead")] - #[unstable(feature = "std_misc")] - pub fn is_owned(&self) -> bool { - match *self { - Owned(_) => true, - _ => false, - } - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> Deref for Cow<'a, B> where - B: ToOwned, ::Owned: 'a -{ - type Target = B; - - fn deref(&self) -> &B { - match *self { - Borrowed(borrowed) => borrowed, - Owned(ref owned) => owned.borrow() - } - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> Eq for Cow<'a, B> where B: Eq + ToOwned, ::Owned: 'a {} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> Ord for Cow<'a, B> where - B: Ord + ToOwned, ::Owned: 'a -{ - #[inline] - fn cmp(&self, other: &Cow<'a, B>) -> Ordering { - Ord::cmp(&**self, &**other) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, 'b, B: ?Sized, C: ?Sized> PartialEq> for Cow<'a, B> where - B: PartialEq + ToOwned, C: ToOwned, - ::Owned: 'a, ::Owned: 'b, -{ - #[inline] - fn eq(&self, other: &Cow<'b, C>) -> bool { - PartialEq::eq(&**self, &**other) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> PartialOrd for Cow<'a, B> where - B: PartialOrd + ToOwned, ::Owned: 'a -{ - #[inline] - fn partial_cmp(&self, other: &Cow<'a, B>) -> Option { - PartialOrd::partial_cmp(&**self, &**other) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> fmt::Debug for Cow<'a, B> where - B: fmt::Debug + ToOwned, - ::Owned: fmt::Debug, -{ - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - match *self { - Borrowed(ref b) => fmt::Debug::fmt(b, f), - Owned(ref o) => fmt::Debug::fmt(o, f), - } - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> fmt::Display for Cow<'a, B> where - B: fmt::Display + ToOwned, - ::Owned: fmt::Display, -{ - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - match *self { - Borrowed(ref b) => fmt::Display::fmt(b, f), - Owned(ref o) => fmt::Display::fmt(o, f), - } - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized, S: Hasher> Hash for Cow<'a, B> where - B: Hash + ToOwned, ::Owned: 'a -{ - #[inline] - fn hash(&self, state: &mut S) { - Hash::hash(&**self, state) - } -} - -/// Trait for moving into a `Cow` -#[stable(feature = "rust1", since = "1.0.0")] -pub trait IntoCow<'a, B: ?Sized> where B: ToOwned { - /// Moves `self` into `Cow` - #[stable(feature = "rust1", since = "1.0.0")] - fn into_cow(self) -> Cow<'a, B>; -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, B: ?Sized> IntoCow<'a, B> for Cow<'a, B> where B: ToOwned { - fn into_cow(self) -> Cow<'a, B> { - self - } -} diff --git a/src/libcollections/btree/map.rs b/src/libcollections/btree/map.rs index 7823f536c7a21..1b456eec830b1 100644 --- a/src/libcollections/btree/map.rs +++ b/src/libcollections/btree/map.rs @@ -852,16 +852,6 @@ impl Extend<(K, V)> for BTreeMap { } } -#[cfg(stage0)] -#[stable(feature = "rust1", since = "1.0.0")] -impl, V: Hash> Hash for BTreeMap { - fn hash(&self, 
state: &mut S) { - for elt in self { - elt.hash(state); - } - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl Hash for BTreeMap { fn hash(&self, state: &mut H) { diff --git a/src/libcollections/lib.rs b/src/libcollections/lib.rs index 6569ab9c05acd..1f3c54a4cb53f 100644 --- a/src/libcollections/lib.rs +++ b/src/libcollections/lib.rs @@ -86,23 +86,17 @@ mod macros; pub mod binary_heap; mod bit; mod btree; -pub mod linked_list; +pub mod borrow; pub mod enum_set; pub mod fmt; -pub mod vec_deque; +pub mod linked_list; pub mod slice; pub mod str; pub mod string; pub mod vec; +pub mod vec_deque; pub mod vec_map; -#[cfg(stage0)] -#[path = "borrow_stage0.rs"] -pub mod borrow; - -#[cfg(not(stage0))] -pub mod borrow; - #[unstable(feature = "collections", reason = "RFC 509")] pub mod bit_vec { diff --git a/src/libcollections/linked_list.rs b/src/libcollections/linked_list.rs index c142819a51896..3c61fc3da90e3 100644 --- a/src/libcollections/linked_list.rs +++ b/src/libcollections/linked_list.rs @@ -28,8 +28,6 @@ use core::cmp::Ordering; use core::default::Default; use core::fmt; use core::hash::{Hasher, Hash}; -#[cfg(stage0)] -use core::hash::Writer; use core::iter::{self, FromIterator, IntoIterator}; use core::mem; use core::ptr; @@ -932,17 +930,6 @@ impl fmt::Debug for LinkedList { } #[stable(feature = "rust1", since = "1.0.0")] -#[cfg(stage0)] -impl> Hash for LinkedList { - fn hash(&self, state: &mut S) { - self.len().hash(state); - for elt in self { - elt.hash(state); - } - } -} -#[stable(feature = "rust1", since = "1.0.0")] -#[cfg(not(stage0))] impl Hash for LinkedList { fn hash(&self, state: &mut H) { self.len().hash(state); diff --git a/src/libcollections/string.rs b/src/libcollections/string.rs index 3b179d0b94c97..6c2624cd204de 100644 --- a/src/libcollections/string.rs +++ b/src/libcollections/string.rs @@ -834,16 +834,7 @@ impl fmt::Debug for String { } } -#[unstable(feature = "collections", reason = "waiting on Hash stabilization")] -#[cfg(stage0)] -impl hash::Hash for String { - #[inline] - fn hash(&self, hasher: &mut H) { - (**self).hash(hasher) - } -} #[stable(feature = "rust1", since = "1.0.0")] -#[cfg(not(stage0))] impl hash::Hash for String { #[inline] fn hash(&self, hasher: &mut H) { diff --git a/src/libcollections/vec.rs b/src/libcollections/vec.rs index 1cc2a5235abec..2f9577c08deba 100644 --- a/src/libcollections/vec.rs +++ b/src/libcollections/vec.rs @@ -1303,15 +1303,7 @@ impl Clone for Vec { } } -#[cfg(stage0)] -impl> Hash for Vec { - #[inline] - fn hash(&self, state: &mut S) { - Hash::hash(&**self, state) - } -} #[stable(feature = "rust1", since = "1.0.0")] -#[cfg(not(stage0))] impl Hash for Vec { #[inline] fn hash(&self, state: &mut H) { @@ -1599,9 +1591,7 @@ impl AsSlice for Vec { fn as_slice(&self) -> &[T] { unsafe { let p = *self.ptr; - if cfg!(not(stage0)) { // NOTE remove cfg after next snapshot - assume(p != 0 as *mut T); - } + assume(p != 0 as *mut T); mem::transmute(RawSlice { data: p, len: self.len diff --git a/src/libcollections/vec_deque.rs b/src/libcollections/vec_deque.rs index 3ba22a41ff740..f65e644fa5284 100644 --- a/src/libcollections/vec_deque.rs +++ b/src/libcollections/vec_deque.rs @@ -32,7 +32,6 @@ use core::ptr::{self, Unique}; use core::raw::Slice as RawSlice; use core::hash::{Hash, Hasher}; -#[cfg(stage0)] use core::hash::Writer; use core::cmp; use alloc::heap; @@ -1675,17 +1674,6 @@ impl Ord for VecDeque { } #[stable(feature = "rust1", since = "1.0.0")] -#[cfg(stage0)] -impl> Hash for VecDeque { - fn hash(&self, state: &mut 
S) { - self.len().hash(state); - for elt in self { - elt.hash(state); - } - } -} -#[stable(feature = "rust1", since = "1.0.0")] -#[cfg(not(stage0))] impl Hash for VecDeque { fn hash(&self, state: &mut H) { self.len().hash(state); diff --git a/src/libcollections/vec_map.rs b/src/libcollections/vec_map.rs index 54589a3142345..d59e3c70c39ba 100644 --- a/src/libcollections/vec_map.rs +++ b/src/libcollections/vec_map.rs @@ -21,7 +21,6 @@ use core::cmp::Ordering; use core::default::Default; use core::fmt; use core::hash::{Hash, Hasher}; -#[cfg(stage0)] use core::hash::Writer; use core::iter::{Enumerate, FilterMap, Map, FromIterator, IntoIterator}; use core::iter; use core::mem::replace; @@ -113,21 +112,7 @@ impl Clone for VecMap { } } -#[cfg(stage0)] -impl> Hash for VecMap { - fn hash(&self, state: &mut S) { - // In order to not traverse the `VecMap` twice, count the elements - // during iteration. - let mut count: usize = 0; - for elt in self { - elt.hash(state); - count += 1; - } - count.hash(state); - } -} #[stable(feature = "rust1", since = "1.0.0")] -#[cfg(not(stage0))] impl Hash for VecMap { fn hash(&self, state: &mut H) { // In order to not traverse the `VecMap` twice, count the elements diff --git a/src/libcore/array.rs b/src/libcore/array.rs index afb5d95c9f8d7..e8f6e31756df9 100644 --- a/src/libcore/array.rs +++ b/src/libcore/array.rs @@ -35,13 +35,6 @@ macro_rules! array_impls { } } - #[cfg(stage0)] - impl> Hash for [T; $N] { - fn hash(&self, state: &mut S) { - Hash::hash(&self[..], state) - } - } - #[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl Hash for [T; $N] { fn hash(&self, state: &mut H) { diff --git a/src/libcore/atomic.rs b/src/libcore/atomic.rs index 6afe5b2257d27..38e2bd98ef9e3 100644 --- a/src/libcore/atomic.rs +++ b/src/libcore/atomic.rs @@ -15,7 +15,7 @@ //! types. //! //! This module defines atomic versions of a select number of primitive -//! types, including `AtomicBool`, `AtomicIsize`, `AtomicUsize`, and `AtomicOption`. +//! types, including `AtomicBool`, `AtomicIsize`, and `AtomicUsize`. //! Atomic types present operations that, when used correctly, synchronize //! updates between threads. //! diff --git a/src/libcore/hash/mod.rs b/src/libcore/hash/mod.rs index 2e83334b93732..ed48903a7c255 100644 --- a/src/libcore/hash/mod.rs +++ b/src/libcore/hash/mod.rs @@ -73,7 +73,6 @@ mod sip; /// to compute the hash. Specific implementations of this trait may specialize /// for particular instances of `H` in order to be able to optimize the hashing /// behavior. -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] pub trait Hash { /// Feeds this value into the state given, updating the hasher as necessary. @@ -89,72 +88,40 @@ pub trait Hash { } } -/// A hashable type. -/// -/// The `H` type parameter is an abstract hash state that is used by the `Hash` -/// to compute the hash. Specific implementations of this trait may specialize -/// for particular instances of `H` in order to be able to optimize the hashing -/// behavior. -#[cfg(stage0)] -pub trait Hash { - /// Feeds this value into the state given, updating the hasher as necessary. - fn hash(&self, state: &mut H); -} - /// A trait which represents the ability to hash an arbitrary stream of bytes. #[stable(feature = "rust1", since = "1.0.0")] pub trait Hasher { - /// Result type of one run of hashing generated by this hasher. - #[cfg(stage0)] - type Output; - - /// Resets this hasher back to its initial state (as if it were just - /// created). 
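The hunks in this region converge on the redesigned hashing API: `Hash` is no longer parameterized over a `Writer`-style state, and `Hasher` keeps the non-stage0 `finish`/`write_*` methods. A minimal sketch of a hand-written impl under the new shape, with a made-up `Point` type and today's `DefaultHasher` standing in for the `SipHasher` of this era:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Point {
    x: i32,
    y: i32,
}

// New-style impl: generic over any Hasher, instead of the deleted
// stage0 form that was bounded on a specific Writer-based state.
impl Hash for Point {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.x.hash(state);
        self.y.hash(state);
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    let mut hasher = DefaultHasher::new();
    p.hash(&mut hasher);
    println!("hash = {:x}", hasher.finish());
}
```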
- #[cfg(stage0)] - fn reset(&mut self); - - /// Completes a round of hashing, producing the output hash generated. - #[cfg(stage0)] - fn finish(&self) -> Self::Output; - /// Completes a round of hashing, producing the output hash generated. - #[cfg(not(stage0))] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn finish(&self) -> u64; /// Writes some data into this `Hasher` - #[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] fn write(&mut self, bytes: &[u8]); /// Write a single `u8` into this hasher - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_u8(&mut self, i: u8) { self.write(&[i]) } /// Write a single `u16` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_u16(&mut self, i: u16) { self.write(&unsafe { mem::transmute::<_, [u8; 2]>(i) }) } /// Write a single `u32` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_u32(&mut self, i: u32) { self.write(&unsafe { mem::transmute::<_, [u8; 4]>(i) }) } /// Write a single `u64` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_u64(&mut self, i: u64) { self.write(&unsafe { mem::transmute::<_, [u8; 8]>(i) }) } /// Write a single `usize` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_usize(&mut self, i: usize) { @@ -166,58 +133,31 @@ pub trait Hasher { } /// Write a single `i8` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_i8(&mut self, i: i8) { self.write_u8(i as u8) } /// Write a single `i16` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_i16(&mut self, i: i16) { self.write_u16(i as u16) } /// Write a single `i32` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_i32(&mut self, i: i32) { self.write_u32(i as u32) } /// Write a single `i64` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_i64(&mut self, i: i64) { self.write_u64(i as u64) } /// Write a single `isize` into this hasher. - #[cfg(not(stage0))] #[inline] #[unstable(feature = "hash", reason = "module was recently redesigned")] fn write_isize(&mut self, i: isize) { self.write_usize(i as usize) } } -/// A common bound on the `Hasher` parameter to `Hash` implementations in order -/// to generically hash an aggregate. -#[unstable(feature = "hash", - reason = "this trait will likely be replaced by io::Writer")] -#[allow(missing_docs)] -#[cfg(stage0)] -pub trait Writer { - fn write(&mut self, bytes: &[u8]); -} - /// Hash a value with the default SipHasher algorithm (two initial keys of 0). /// /// The specified value will be hashed with this hasher and then the resulting /// hash will be returned. -#[cfg(stage0)] -pub fn hash, H: Hasher + Default>(value: &T) -> H::Output { - let mut h: H = Default::default(); - value.hash(&mut h); - h.finish() -} - -/// Hash a value with the default SipHasher algorithm (two initial keys of 0). 
-/// -/// The specified value will be hashed with this hasher and then the resulting -/// hash will be returned. -#[cfg(not(stage0))] #[unstable(feature = "hash", reason = "module was recently redesigned")] pub fn hash(value: &T) -> u64 { let mut h: H = Default::default(); @@ -227,145 +167,6 @@ pub fn hash(value: &T) -> u64 { ////////////////////////////////////////////////////////////////////////////// -#[cfg(stage0)] -mod impls { - use prelude::*; - - use mem; - use num::Int; - use super::*; - - macro_rules! impl_hash { - ($ty:ident, $uty:ident) => { - impl Hash for $ty { - #[inline] - fn hash(&self, state: &mut S) { - let a: [u8; ::$ty::BYTES] = unsafe { - mem::transmute(*self) - }; - state.write(&a) - } - } - } - } - - impl_hash! { u8, u8 } - impl_hash! { u16, u16 } - impl_hash! { u32, u32 } - impl_hash! { u64, u64 } - impl_hash! { uint, uint } - impl_hash! { i8, u8 } - impl_hash! { i16, u16 } - impl_hash! { i32, u32 } - impl_hash! { i64, u64 } - impl_hash! { int, uint } - - impl Hash for bool { - #[inline] - fn hash(&self, state: &mut S) { - (*self as u8).hash(state); - } - } - - impl Hash for char { - #[inline] - fn hash(&self, state: &mut S) { - (*self as u32).hash(state); - } - } - - impl Hash for str { - #[inline] - fn hash(&self, state: &mut S) { - state.write(self.as_bytes()); - 0xffu8.hash(state) - } - } - - macro_rules! impl_hash_tuple { - () => ( - impl Hash for () { - #[inline] - fn hash(&self, _state: &mut S) {} - } - ); - - ( $($name:ident)+) => ( - impl),*> Hash for ($($name,)*) { - #[inline] - #[allow(non_snake_case)] - fn hash(&self, state: &mut S) { - match *self { - ($(ref $name,)*) => { - $( - $name.hash(state); - )* - } - } - } - } - ); - } - - impl_hash_tuple! {} - impl_hash_tuple! { A } - impl_hash_tuple! { A B } - impl_hash_tuple! { A B C } - impl_hash_tuple! { A B C D } - impl_hash_tuple! { A B C D E } - impl_hash_tuple! { A B C D E F } - impl_hash_tuple! { A B C D E F G } - impl_hash_tuple! { A B C D E F G H } - impl_hash_tuple! { A B C D E F G H I } - impl_hash_tuple! { A B C D E F G H I J } - impl_hash_tuple! { A B C D E F G H I J K } - impl_hash_tuple! { A B C D E F G H I J K L } - - impl> Hash for [T] { - #[inline] - fn hash(&self, state: &mut S) { - self.len().hash(state); - for elt in self { - elt.hash(state); - } - } - } - - - impl<'a, S: Hasher, T: ?Sized + Hash> Hash for &'a T { - #[inline] - fn hash(&self, state: &mut S) { - (**self).hash(state); - } - } - - impl<'a, S: Hasher, T: ?Sized + Hash> Hash for &'a mut T { - #[inline] - fn hash(&self, state: &mut S) { - (**self).hash(state); - } - } - - impl Hash for *const T { - #[inline] - fn hash(&self, state: &mut S) { - // NB: raw-pointer Hash does _not_ dereference - // to the target; it just gives you the pointer-bytes. - (*self as uint).hash(state); - } - } - - impl Hash for *mut T { - #[inline] - fn hash(&self, state: &mut S) { - // NB: raw-pointer Hash does _not_ dereference - // to the target; it just gives you the pointer-bytes. - (*self as uint).hash(state); - } - } -} - -#[cfg(not(stage0))] mod impls { use prelude::*; diff --git a/src/libcore/hash/sip.rs b/src/libcore/hash/sip.rs index ce8917cc20589..6f24fc7067344 100644 --- a/src/libcore/hash/sip.rs +++ b/src/libcore/hash/sip.rs @@ -16,8 +16,6 @@ use prelude::*; use default::Default; use super::Hasher; -#[cfg(stage0)] -use super::Writer; /// An implementation of SipHash 2-4. 
/// @@ -175,26 +173,9 @@ impl SipHasher { } } -#[cfg(stage0)] -impl Writer for SipHasher { - #[inline] - fn write(&mut self, msg: &[u8]) { - self.write(msg) - } -} - #[stable(feature = "rust1", since = "1.0.0")] impl Hasher for SipHasher { - #[cfg(stage0)] - type Output = u64; - - #[cfg(stage0)] - fn reset(&mut self) { - self.reset(); - } - #[inline] - #[cfg(not(stage0))] fn write(&mut self, msg: &[u8]) { self.write(msg) } diff --git a/src/libcore/iter.rs b/src/libcore/iter.rs index 8fb10b5b2dc2a..2d50bbb641363 100644 --- a/src/libcore/iter.rs +++ b/src/libcore/iter.rs @@ -2592,7 +2592,29 @@ pub struct RangeStep { rev: bool, } -/// Return an iterator over the range [start, stop) by `step`. It handles overflow by stopping. +/// Return an iterator over the range [start, stop) by `step`. +/// +/// It handles overflow by stopping. +/// +/// # Examples +/// +/// ``` +/// use std::iter::range_step; +/// +/// for i in range_step(0, 10, 2) { +/// println!("{}", i); +/// } +/// ``` +/// +/// This prints: +/// +/// ```text +/// 0 +/// 2 +/// 4 +/// 6 +/// 8 +/// ``` #[inline] #[unstable(feature = "core", reason = "likely to be replaced by range notation and adapters")] @@ -2633,7 +2655,30 @@ pub struct RangeStepInclusive { done: bool, } -/// Return an iterator over the range [start, stop] by `step`. It handles overflow by stopping. +/// Return an iterator over the range [start, stop] by `step`. +/// +/// It handles overflow by stopping. +/// +/// # Examples +/// +/// ``` +/// use std::iter::range_step_inclusive; +/// +/// for i in range_step_inclusive(0, 10, 2) { +/// println!("{}", i); +/// } +/// ``` +/// +/// This prints: +/// +/// ```text +/// 0 +/// 2 +/// 4 +/// 6 +/// 8 +/// 10 +/// ``` #[inline] #[unstable(feature = "core", reason = "likely to be replaced by range notation and adapters")] diff --git a/src/libcore/marker.rs b/src/libcore/marker.rs index d284eb341792b..99385725a99a1 100644 --- a/src/libcore/marker.rs +++ b/src/libcore/marker.rs @@ -31,20 +31,10 @@ use option::Option; use hash::Hash; use hash::Hasher; -/// Types able to be transferred across thread boundaries. -#[unstable(feature = "core", - reason = "will be overhauled with new lifetime rules; see RFC 458")] -#[lang="send"] -#[rustc_on_unimplemented = "`{Self}` cannot be sent between threads safely"] -#[cfg(stage0)] -pub unsafe trait Send: 'static { - // empty. -} /// Types able to be transferred across thread boundaries. #[stable(feature = "rust1", since = "1.0.0")] #[lang="send"] #[rustc_on_unimplemented = "`{Self}` cannot be sent between threads safely"] -#[cfg(not(stage0))] pub unsafe trait Send : MarkerTrait { // empty. } @@ -233,13 +223,6 @@ pub struct Managed; macro_rules! impls{ ($t: ident) => ( - #[cfg(stage0)] - impl Hash for $t { - #[inline] - fn hash(&self, _: &mut S) { - } - } - #[cfg(not(stage0))] impl Hash for $t { #[inline] fn hash(&self, _: &mut H) { @@ -348,14 +331,6 @@ impl MarkerTrait for T { } #[stable(feature = "rust1", since = "1.0.0")] pub trait PhantomFn { } -#[cfg(stage0)] // built into the trait matching system after stage0 -impl PhantomFn for U { } - -/// Specific to stage0. You should not be seeing these docs! -#[cfg(stage0)] -#[lang="covariant_type"] // only relevant to stage0 -pub struct PhantomData; - /// `PhantomData` is a way to tell the compiler about fake fields. /// Phantom data is required whenever type parameters are not used. /// The idea is that if the compiler encounters a `PhantomData` @@ -374,14 +349,12 @@ pub struct PhantomData; /// here! 
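The `PhantomData` docs here describe telling the compiler about type parameters that carry no runtime data; a small sketch of that pattern, where `Id`, `User`, and `Order` are names made up purely for illustration:

```rust
use std::marker::PhantomData;

// A typed identifier: T appears only at the type level, so PhantomData
// is needed to make the parameter count as "used".
struct Id<T> {
    value: u64,
    _marker: PhantomData<T>,
}

impl<T> Id<T> {
    fn new(value: u64) -> Id<T> {
        Id { value: value, _marker: PhantomData }
    }
}

struct User;
struct Order;

fn main() {
    let user: Id<User> = Id::new(1);
    let order: Id<Order> = Id::new(1);
    // Same runtime value, different types: mixing them up is a compile error.
    println!("{} {}", user.value, order.value);
}
```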
For now, please see [RFC 738][738] for more information. /// /// [738]: https://github.com/rust-lang/rfcs/blob/master/text/0738-variance.md -#[cfg(not(stage0))] #[lang="phantom_data"] #[stable(feature = "rust1", since = "1.0.0")] pub struct PhantomData; impls! { PhantomData } -#[cfg(not(stage0))] mod impls { use super::{Send, Sync, Sized}; @@ -417,7 +390,6 @@ pub struct ContravariantType; #[unstable(feature = "core", reason = "deprecated")] #[deprecated(since = "1.0.0", reason = "Replace with `PhantomData`")] #[lang="covariant_type"] -#[cfg(not(stage0))] pub struct CovariantType; /// Old-style marker trait. Deprecated. diff --git a/src/libcoretest/fmt/num.rs b/src/libcoretest/fmt/num.rs index bc8461b0b9e09..bc3995439a017 100644 --- a/src/libcoretest/fmt/num.rs +++ b/src/libcoretest/fmt/num.rs @@ -170,42 +170,42 @@ mod u32 { use test::Bencher; use core::fmt::radix; use std::rand::{weak_rng, Rng}; - use std::old_io::util::NullWriter; + use std::io::{Write, sink}; #[bench] fn format_bin(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{:b}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{:b}", rng.gen::()) }) } #[bench] fn format_oct(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{:o}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{:o}", rng.gen::()) }) } #[bench] fn format_dec(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{}", rng.gen::()) }) } #[bench] fn format_hex(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{:x}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{:x}", rng.gen::()) }) } #[bench] fn format_show(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{:?}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{:?}", rng.gen::()) }) } #[bench] fn format_base_36(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{}", radix(rng.gen::(), 36)) }) + b.iter(|| { write!(&mut sink(), "{}", radix(rng.gen::(), 36)) }) } } @@ -213,41 +213,41 @@ mod i32 { use test::Bencher; use core::fmt::radix; use std::rand::{weak_rng, Rng}; - use std::old_io::util::NullWriter; + use std::io::{Write, sink}; #[bench] fn format_bin(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{:b}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{:b}", rng.gen::()) }) } #[bench] fn format_oct(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{:o}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{:o}", rng.gen::()) }) } #[bench] fn format_dec(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{}", rng.gen::()) }) } #[bench] fn format_hex(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{:x}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{:x}", rng.gen::()) }) } #[bench] fn format_show(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{:?}", rng.gen::()) }) + b.iter(|| { write!(&mut sink(), "{:?}", rng.gen::()) }) } #[bench] fn format_base_36(b: &mut Bencher) { let mut rng = weak_rng(); - b.iter(|| { write!(&mut NullWriter, "{}", radix(rng.gen::(), 36)) }) + b.iter(|| { write!(&mut sink(), "{}", radix(rng.gen::(), 36)) }) } } diff --git a/src/libcoretest/iter.rs b/src/libcoretest/iter.rs index 
39a590c730743..6cbc7bf1bbc51 100644 --- a/src/libcoretest/iter.rs +++ b/src/libcoretest/iter.rs @@ -82,7 +82,7 @@ fn test_iterator_chain() { let xs = [0, 1, 2, 3, 4, 5]; let ys = [30, 40, 50, 60]; let expected = [0, 1, 2, 3, 4, 5, 30, 40, 50, 60]; - let mut it = xs.iter().chain(ys.iter()); + let it = xs.iter().chain(ys.iter()); let mut i = 0; for &x in it { assert_eq!(x, expected[i]); @@ -91,7 +91,7 @@ fn test_iterator_chain() { assert_eq!(i, expected.len()); let ys = count(30, 10).take(4); - let mut it = xs.iter().cloned().chain(ys); + let it = xs.iter().cloned().chain(ys); let mut i = 0; for x in it { assert_eq!(x, expected[i]); @@ -110,7 +110,7 @@ fn test_filter_map() { #[test] fn test_iterator_enumerate() { let xs = [0, 1, 2, 3, 4, 5]; - let mut it = xs.iter().enumerate(); + let it = xs.iter().enumerate(); for (i, &x) in it { assert_eq!(i, x); } @@ -152,7 +152,7 @@ fn test_iterator_peekable() { fn test_iterator_take_while() { let xs = [0, 1, 2, 3, 5, 13, 15, 16, 17, 19]; let ys = [0, 1, 2, 3, 5, 13]; - let mut it = xs.iter().take_while(|&x| *x < 15); + let it = xs.iter().take_while(|&x| *x < 15); let mut i = 0; for x in it { assert_eq!(*x, ys[i]); @@ -165,7 +165,7 @@ fn test_iterator_take_while() { fn test_iterator_skip_while() { let xs = [0, 1, 2, 3, 5, 13, 15, 16, 17, 19]; let ys = [15, 16, 17, 19]; - let mut it = xs.iter().skip_while(|&x| *x < 15); + let it = xs.iter().skip_while(|&x| *x < 15); let mut i = 0; for x in it { assert_eq!(*x, ys[i]); @@ -231,7 +231,7 @@ fn test_iterator_scan() { let xs = [0, 1, 2, 3, 4]; let ys = [0f64, 1.0, 3.0, 6.0, 10.0]; - let mut it = xs.iter().scan(0, add); + let it = xs.iter().scan(0, add); let mut i = 0; for x in it { assert_eq!(x, ys[i]); @@ -244,7 +244,7 @@ fn test_iterator_scan() { fn test_iterator_flat_map() { let xs = [0, 3, 6]; let ys = [0, 1, 2, 3, 4, 5, 6, 7, 8]; - let mut it = xs.iter().flat_map(|&x| count(x, 1).take(3)); + let it = xs.iter().flat_map(|&x| count(x, 1).take(3)); let mut i = 0; for x in it { assert_eq!(x, ys[i]); @@ -279,7 +279,7 @@ fn test_unfoldr() { } } - let mut it = Unfold::new(0, count); + let it = Unfold::new(0, count); let mut i = 0; for counted in it { assert_eq!(counted, i); diff --git a/src/libcoretest/lib.rs b/src/libcoretest/lib.rs index 2dfd81f32c270..03924910e0485 100644 --- a/src/libcoretest/lib.rs +++ b/src/libcoretest/lib.rs @@ -12,6 +12,15 @@ #![feature(int_uint)] #![feature(unboxed_closures)] #![feature(unsafe_destructor)] +#![feature(core)] +#![feature(test)] +#![feature(rand)] +#![feature(unicode)] +#![feature(std_misc)] +#![feature(libc)] +#![feature(hash)] +#![feature(io)] +#![feature(collections)] #![allow(deprecated)] // rand extern crate core; diff --git a/src/libgetopts/lib.rs b/src/libgetopts/lib.rs index fdd7f7395c2b7..4e329897e1ab2 100644 --- a/src/libgetopts/lib.rs +++ b/src/libgetopts/lib.rs @@ -784,7 +784,7 @@ pub fn usage(brief: &str, opts: &[OptGroup]) -> String { // FIXME: #5516 should be graphemes not codepoints // wrapped description - row.push_str(&desc_rows.connect(&desc_sep[..])[]); + row.push_str(&desc_rows.connect(&desc_sep[..])); row }); diff --git a/src/librand/lib.rs b/src/librand/lib.rs index 7588bf7c5158e..583c658dfe058 100644 --- a/src/librand/lib.rs +++ b/src/librand/lib.rs @@ -32,6 +32,8 @@ #![deprecated(reason = "use the crates.io `rand` library instead", since = "1.0.0-alpha")] +#![cfg_attr(test, feature(test, rand))] + #![allow(deprecated)] #[macro_use] diff --git a/src/librbml/lib.rs b/src/librbml/lib.rs index 4af322089d53d..c48dd7a6ee894 100644 --- 
a/src/librbml/lib.rs +++ b/src/librbml/lib.rs @@ -32,6 +32,8 @@ #![feature(rustc_private)] #![feature(staged_api)] +#![cfg_attr(test, feature(test))] + extern crate serialize; #[macro_use] extern crate log; diff --git a/src/librustc/lint/builtin.rs b/src/librustc/lint/builtin.rs index dc81e89902bb4..1db993fdafd27 100644 --- a/src/librustc/lint/builtin.rs +++ b/src/librustc/lint/builtin.rs @@ -1,4 +1,4 @@ -// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -588,7 +588,7 @@ impl LintPass for RawPointerDerive { } fn check_item(&mut self, cx: &Context, item: &ast::Item) { - if !attr::contains_name(&item.attrs[], "automatically_derived") { + if !attr::contains_name(&item.attrs, "automatically_derived") { return } let did = match item.node { @@ -652,7 +652,7 @@ impl LintPass for UnusedAttributes { if !attr::is_used(attr) { cx.span_lint(UNUSED_ATTRIBUTES, attr.span, "unused attribute"); - if KNOWN_ATTRIBUTES.contains(&(&attr.name()[], AttributeType::CrateLevel)) { + if KNOWN_ATTRIBUTES.contains(&(&attr.name(), AttributeType::CrateLevel)) { let msg = match attr.node.style { ast::AttrOuter => "crate-level attribute should be an inner \ attribute: add an exclamation mark: #![foo]", @@ -732,7 +732,7 @@ impl LintPass for UnusedResults { ty::ty_enum(did, _) => { if ast_util::is_local(did) { if let ast_map::NodeItem(it) = cx.tcx.map.get(did.node) { - warned |= check_must_use(cx, &it.attrs[], s.span); + warned |= check_must_use(cx, &it.attrs, s.span); } } else { let attrs = csearch::get_item_attrs(&cx.sess().cstore, did); @@ -1093,7 +1093,7 @@ impl UnusedParens { if !necessary { cx.span_lint(UNUSED_PARENS, value.span, &format!("unnecessary parentheses around {}", - msg)[]) + msg)) } } @@ -1235,7 +1235,7 @@ impl LintPass for NonShorthandFieldPatterns { if ident.node.as_str() == fieldpat.node.ident.as_str() { cx.span_lint(NON_SHORTHAND_FIELD_PATTERNS, fieldpat.span, &format!("the `{}:` in this pattern is redundant and can \ - be removed", ident.node.as_str())[]) + be removed", ident.node.as_str())) } } } @@ -1269,27 +1269,71 @@ impl LintPass for UnusedUnsafe { } declare_lint! { - UNSAFE_BLOCKS, + UNSAFE_CODE, Allow, - "usage of an `unsafe` block" + "usage of `unsafe` code" } #[derive(Copy)] -pub struct UnsafeBlocks; +pub struct UnsafeCode; -impl LintPass for UnsafeBlocks { +impl LintPass for UnsafeCode { fn get_lints(&self) -> LintArray { - lint_array!(UNSAFE_BLOCKS) + lint_array!(UNSAFE_CODE) } fn check_expr(&mut self, cx: &Context, e: &ast::Expr) { if let ast::ExprBlock(ref blk) = e.node { // Don't warn about generated blocks, that'll just pollute the output. 
if blk.rules == ast::UnsafeBlock(ast::UserProvided) { - cx.span_lint(UNSAFE_BLOCKS, blk.span, "usage of an `unsafe` block"); + cx.span_lint(UNSAFE_CODE, blk.span, "usage of an `unsafe` block"); } } } + + fn check_item(&mut self, cx: &Context, it: &ast::Item) { + use syntax::ast::Unsafety::Unsafe; + + fn check_method(cx: &Context, meth: &P) { + if let ast::Method_::MethDecl(_, _, _, _, Unsafe, _, _, _) = meth.node { + cx.span_lint(UNSAFE_CODE, meth.span, "implementation of an `unsafe` method"); + } + } + + match it.node { + ast::ItemFn(_, Unsafe, _, _, _) => + cx.span_lint(UNSAFE_CODE, it.span, "declaration of an `unsafe` function"), + + ast::ItemTrait(trait_safety, _, _, ref items) => { + if trait_safety == Unsafe { + cx.span_lint(UNSAFE_CODE, it.span, "declaration of an `unsafe` trait"); + } + + for it in items { + match *it { + ast::RequiredMethod(ast::TypeMethod { unsafety: Unsafe, span, ..}) => + cx.span_lint(UNSAFE_CODE, span, "declaration of an `unsafe` method"), + ast::ProvidedMethod(ref meth) => check_method(cx, meth), + _ => (), + } + } + }, + + ast::ItemImpl(impl_safety, _, _, _, _, ref impls) => { + if impl_safety == Unsafe { + cx.span_lint(UNSAFE_CODE, it.span, "implementation of an `unsafe` trait"); + } + + for item in impls { + if let ast::ImplItem::MethodImplItem(ref meth) = *item { + check_method(cx, meth); + } + } + }, + + _ => return, + } + } } declare_lint! { @@ -1339,7 +1383,7 @@ impl LintPass for UnusedMut { fn check_expr(&mut self, cx: &Context, e: &ast::Expr) { if let ast::ExprMatch(_, ref arms, _) = e.node { for a in arms { - self.check_unused_mut_pat(cx, &a.pats[]) + self.check_unused_mut_pat(cx, &a.pats) } } } @@ -1460,7 +1504,7 @@ impl MissingDoc { }); if !has_doc { cx.span_lint(MISSING_DOCS, sp, - &format!("missing documentation for {}", desc)[]); + &format!("missing documentation for {}", desc)); } } } @@ -1496,7 +1540,7 @@ impl LintPass for MissingDoc { } fn check_crate(&mut self, cx: &Context, krate: &ast::Crate) { - self.check_missing_docs_attrs(cx, None, &krate.attrs[], + self.check_missing_docs_attrs(cx, None, &krate.attrs, krate.span, "crate"); } @@ -1510,7 +1554,7 @@ impl LintPass for MissingDoc { ast::ItemTy(..) => "a type alias", _ => return }; - self.check_missing_docs_attrs(cx, Some(it.id), &it.attrs[], + self.check_missing_docs_attrs(cx, Some(it.id), &it.attrs, it.span, desc); } @@ -1523,13 +1567,13 @@ impl LintPass for MissingDoc { // Otherwise, doc according to privacy. This will also check // doc for default methods defined on traits. 
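The renamed and extended `unsafe_code` lint above now covers `unsafe` blocks, functions, traits, and impls; from a user's point of view it is driven by the usual lint attributes, roughly as in this sketch (the `read_first_byte` helper is invented for illustration):

```rust
// Deny any use of `unsafe` in this crate; with the extension in this patch
// that includes `unsafe fn` / `unsafe trait` / `unsafe impl`, not just blocks.
#![deny(unsafe_code)]

// Re-allow it locally where the unsafety is reviewed and justified.
#[allow(unsafe_code)]
fn read_first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // Bounds already checked above, so the unchecked access is sound.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(read_first_byte(b"hi"), Some(b'h'));
    assert_eq!(read_first_byte(b""), None);
}
```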
- self.check_missing_docs_attrs(cx, Some(m.id), &m.attrs[], + self.check_missing_docs_attrs(cx, Some(m.id), &m.attrs, m.span, "a method"); } } fn check_ty_method(&mut self, cx: &Context, tm: &ast::TypeMethod) { - self.check_missing_docs_attrs(cx, Some(tm.id), &tm.attrs[], + self.check_missing_docs_attrs(cx, Some(tm.id), &tm.attrs, tm.span, "a type method"); } @@ -1539,14 +1583,14 @@ impl LintPass for MissingDoc { let cur_struct_def = *self.struct_def_stack.last() .expect("empty struct_def_stack"); self.check_missing_docs_attrs(cx, Some(cur_struct_def), - &sf.node.attrs[], sf.span, + &sf.node.attrs, sf.span, "a struct field") } } } fn check_variant(&mut self, cx: &Context, v: &ast::Variant, _: &ast::Generics) { - self.check_missing_docs_attrs(cx, Some(v.node.id), &v.node.attrs[], + self.check_missing_docs_attrs(cx, Some(v.node.id), &v.node.attrs, v.span, "a variant"); assert!(!self.in_variant); self.in_variant = true; diff --git a/src/librustc/lint/context.rs b/src/librustc/lint/context.rs index 068c179d3431f..d344ee8c881c5 100644 --- a/src/librustc/lint/context.rs +++ b/src/librustc/lint/context.rs @@ -1,4 +1,4 @@ -// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT +// Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // @@ -105,7 +105,7 @@ impl LintStore { } pub fn get_lints<'t>(&'t self) -> &'t [(&'static Lint, bool)] { - &self.lints[] + &self.lints } pub fn get_lint_groups<'t>(&'t self) -> Vec<(&'static str, Vec, bool)> { @@ -206,7 +206,7 @@ impl LintStore { UnusedImportBraces, NonShorthandFieldPatterns, UnusedUnsafe, - UnsafeBlocks, + UnsafeCode, UnusedMut, UnusedAllocation, MissingCopyImplementations, @@ -276,7 +276,7 @@ impl LintStore { .collect::>(); } None => sess.err(&format!("unknown {} flag: {}", - level.as_str(), lint_name)[]), + level.as_str(), lint_name)), } } } @@ -527,7 +527,7 @@ impl<'a, 'tcx> Context<'a, 'tcx> { self.tcx.sess.span_err(span, &format!("{}({}) overruled by outer forbid({})", level.as_str(), lint_name, - lint_name)[]); + lint_name)); } else if now != level { let src = self.lints.get_level_source(lint_id).1; self.level_stack.push((lint_id, (now, src))); @@ -562,7 +562,7 @@ impl<'a, 'tcx> Context<'a, 'tcx> { impl<'a, 'tcx, 'v> Visitor<'v> for Context<'a, 'tcx> { fn visit_item(&mut self, it: &ast::Item) { - self.with_lint_attrs(&it.attrs[], |cx| { + self.with_lint_attrs(&it.attrs, |cx| { run_lints!(cx, check_item, it); cx.visit_ids(|v| v.visit_item(it)); visit::walk_item(cx, it); @@ -570,7 +570,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for Context<'a, 'tcx> { } fn visit_foreign_item(&mut self, it: &ast::ForeignItem) { - self.with_lint_attrs(&it.attrs[], |cx| { + self.with_lint_attrs(&it.attrs, |cx| { run_lints!(cx, check_foreign_item, it); visit::walk_foreign_item(cx, it); }) @@ -595,7 +595,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for Context<'a, 'tcx> { body: &'v ast::Block, span: Span, id: ast::NodeId) { match fk { visit::FkMethod(_, _, m) => { - self.with_lint_attrs(&m.attrs[], |cx| { + self.with_lint_attrs(&m.attrs, |cx| { run_lints!(cx, check_fn, fk, decl, body, span, id); cx.visit_ids(|v| { v.visit_fn(fk, decl, body, span, id); @@ -611,7 +611,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for Context<'a, 'tcx> { } fn visit_ty_method(&mut self, t: &ast::TypeMethod) { - self.with_lint_attrs(&t.attrs[], |cx| { + self.with_lint_attrs(&t.attrs, |cx| { run_lints!(cx, check_ty_method, t); visit::walk_ty_method(cx, t); }) @@ -628,14 +628,14 @@ impl<'a, 
'tcx, 'v> Visitor<'v> for Context<'a, 'tcx> { } fn visit_struct_field(&mut self, s: &ast::StructField) { - self.with_lint_attrs(&s.node.attrs[], |cx| { + self.with_lint_attrs(&s.node.attrs, |cx| { run_lints!(cx, check_struct_field, s); visit::walk_struct_field(cx, s); }) } fn visit_variant(&mut self, v: &ast::Variant, g: &ast::Generics) { - self.with_lint_attrs(&v.node.attrs[], |cx| { + self.with_lint_attrs(&v.node.attrs, |cx| { run_lints!(cx, check_variant, v, g); visit::walk_variant(cx, v, g); run_lints!(cx, check_variant_post, v, g); @@ -779,7 +779,7 @@ pub fn check_crate(tcx: &ty::ctxt, let mut cx = Context::new(tcx, krate, exported_items); // Visit the whole crate. - cx.with_lint_attrs(&krate.attrs[], |cx| { + cx.with_lint_attrs(&krate.attrs, |cx| { cx.visit_id(ast::CRATE_NODE_ID); cx.visit_ids(|v| { v.visited_outermost = true; diff --git a/src/librustc/lint/mod.rs b/src/librustc/lint/mod.rs index bdcc10ebceca0..021827b0101c8 100644 --- a/src/librustc/lint/mod.rs +++ b/src/librustc/lint/mod.rs @@ -185,14 +185,6 @@ impl PartialEq for LintId { impl Eq for LintId { } -#[cfg(stage0)] -impl hash::Hash for LintId { - fn hash(&self, state: &mut S) { - let ptr = self.lint as *const Lint; - ptr.hash(state); - } -} -#[cfg(not(stage0))] impl hash::Hash for LintId { fn hash(&self, state: &mut H) { let ptr = self.lint as *const Lint; diff --git a/src/librustc/metadata/creader.rs b/src/librustc/metadata/creader.rs index d48a404176ace..4c123b55e8e50 100644 --- a/src/librustc/metadata/creader.rs +++ b/src/librustc/metadata/creader.rs @@ -61,7 +61,7 @@ fn dump_crates(cstore: &CStore) { } fn should_link(i: &ast::Item) -> bool { - !attr::contains_name(&i.attrs[], "no_link") + !attr::contains_name(&i.attrs, "no_link") } struct CrateInfo { @@ -85,7 +85,7 @@ pub fn validate_crate_name(sess: Option<&Session>, s: &str, sp: Option) { for c in s.chars() { if c.is_alphanumeric() { continue } if c == '_' || c == '-' { continue } - err(&format!("invalid character `{}` in crate name: `{}`", c, s)[]); + err(&format!("invalid character `{}` in crate name: `{}`", c, s)); } match sess { Some(sess) => sess.abort_if_errors(), @@ -210,8 +210,8 @@ impl<'a> CrateReader<'a> { match self.extract_crate_info(i) { Some(info) => { let (cnum, _, _) = self.resolve_crate(&None, - &info.ident[], - &info.name[], + &info.ident, + &info.name, None, i.span, PathKind::Crate); @@ -268,7 +268,7 @@ impl<'a> CrateReader<'a> { } else { self.sess.span_err(m.span, &format!("unknown kind: `{}`", - k)[]); + k)); cstore::NativeUnknown } } @@ -413,7 +413,7 @@ impl<'a> CrateReader<'a> { hash: hash.map(|a| &*a), filesearch: self.sess.target_filesearch(kind), target: &self.sess.target.target, - triple: &self.sess.opts.target_triple[], + triple: &self.sess.opts.target_triple, root: root, rejected_via_hash: vec!(), rejected_via_triple: vec!(), @@ -440,8 +440,8 @@ impl<'a> CrateReader<'a> { decoder::get_crate_deps(cdata).iter().map(|dep| { debug!("resolving dep crate {} hash: `{}`", dep.name, dep.hash); let (local_cnum, _, _) = self.resolve_crate(root, - &dep.name[], - &dep.name[], + &dep.name, + &dep.name, Some(&dep.hash), span, PathKind::Dependency); @@ -450,7 +450,7 @@ impl<'a> CrateReader<'a> { } fn read_extension_crate(&mut self, span: Span, info: &CrateInfo) -> ExtensionCrate { - let target_triple = &self.sess.opts.target_triple[]; + let target_triple = &self.sess.opts.target_triple[..]; let is_cross = target_triple != config::host_triple(); let mut should_link = info.should_link && !is_cross; let mut target_only = false; @@ -493,8 +493,8 @@ 
impl<'a> CrateReader<'a> { PathKind::Crate).is_none(); let metadata = if register { // Register crate now to avoid double-reading metadata - let (_, cmd, _) = self.register_crate(&None, &info.ident[], - &info.name[], span, library); + let (_, cmd, _) = self.register_crate(&None, &info.ident, + &info.name, span, library); PMDSource::Registered(cmd) } else { // Not registering the crate; just hold on to the metadata diff --git a/src/librustc/metadata/csearch.rs b/src/librustc/metadata/csearch.rs index 7eeb0589118fb..38e15af2056be 100644 --- a/src/librustc/metadata/csearch.rs +++ b/src/librustc/metadata/csearch.rs @@ -92,7 +92,7 @@ pub fn get_item_path(tcx: &ty::ctxt, def: ast::DefId) -> Vec // FIXME #1920: This path is not always correct if the crate is not linked // into the root namespace. - let mut r = vec![ast_map::PathMod(token::intern(&cdata.name[]))]; + let mut r = vec![ast_map::PathMod(token::intern(&cdata.name))]; r.push_all(&path); r } @@ -391,7 +391,7 @@ pub fn is_staged_api(cstore: &cstore::CStore, def: ast::DefId) -> bool { let cdata = cstore.get_crate_data(def.krate); let attrs = decoder::get_crate_attributes(cdata.data()); for attr in &attrs { - if &attr.name()[] == "staged_api" { + if &attr.name()[..] == "staged_api" { match attr.node.value.node { ast::MetaWord(_) => return true, _ => (/*pass*/) } } } diff --git a/src/librustc/metadata/decoder.rs b/src/librustc/metadata/decoder.rs index e5576de6e8424..f883c8a1bb918 100644 --- a/src/librustc/metadata/decoder.rs +++ b/src/librustc/metadata/decoder.rs @@ -1225,7 +1225,7 @@ pub fn get_crate_deps(data: &[u8]) -> Vec { } reader::tagged_docs(depsdoc, tag_crate_dep, |depdoc| { let name = docstr(depdoc, tag_crate_dep_crate_name); - let hash = Svh::new(&docstr(depdoc, tag_crate_dep_hash)[]); + let hash = Svh::new(&docstr(depdoc, tag_crate_dep_hash)); deps.push(CrateDep { cnum: crate_num, name: name, diff --git a/src/librustc/metadata/encoder.rs b/src/librustc/metadata/encoder.rs index 42a70cec5dfee..629b19300e62c 100644 --- a/src/librustc/metadata/encoder.rs +++ b/src/librustc/metadata/encoder.rs @@ -94,7 +94,7 @@ fn encode_impl_type_basename(rbml_w: &mut Encoder, name: ast::Ident) { } pub fn encode_def_id(rbml_w: &mut Encoder, id: DefId) { - rbml_w.wr_tagged_str(tag_def_id, &def_to_string(id)[]); + rbml_w.wr_tagged_str(tag_def_id, &def_to_string(id)); } #[derive(Clone)] @@ -273,7 +273,7 @@ fn encode_symbol(ecx: &EncodeContext, } None => { ecx.diag.handler().bug( - &format!("encode_symbol: id not found {}", id)[]); + &format!("encode_symbol: id not found {}", id)); } } rbml_w.end_tag(); @@ -341,8 +341,8 @@ fn encode_enum_variant_info(ecx: &EncodeContext, encode_name(rbml_w, variant.node.name.name); encode_parent_item(rbml_w, local_def(id)); encode_visibility(rbml_w, variant.node.vis); - encode_attributes(rbml_w, &variant.node.attrs[]); - encode_repr_attrs(rbml_w, ecx, &variant.node.attrs[]); + encode_attributes(rbml_w, &variant.node.attrs); + encode_repr_attrs(rbml_w, ecx, &variant.node.attrs); let stab = stability::lookup(ecx.tcx, ast_util::local_def(variant.node.id)); encode_stability(rbml_w, stab); @@ -394,12 +394,12 @@ fn encode_reexported_static_method(rbml_w: &mut Encoder, exp.name, token::get_name(method_name)); rbml_w.start_tag(tag_items_data_item_reexport); rbml_w.start_tag(tag_items_data_item_reexport_def_id); - rbml_w.wr_str(&def_to_string(method_def_id)[]); + rbml_w.wr_str(&def_to_string(method_def_id)); rbml_w.end_tag(); rbml_w.start_tag(tag_items_data_item_reexport_name); rbml_w.wr_str(&format!("{}::{}", exp.name, - 
token::get_name(method_name))[]); + token::get_name(method_name))); rbml_w.end_tag(); rbml_w.end_tag(); } @@ -537,7 +537,7 @@ fn encode_reexports(ecx: &EncodeContext, id); rbml_w.start_tag(tag_items_data_item_reexport); rbml_w.start_tag(tag_items_data_item_reexport_def_id); - rbml_w.wr_str(&def_to_string(exp.def_id)[]); + rbml_w.wr_str(&def_to_string(exp.def_id)); rbml_w.end_tag(); rbml_w.start_tag(tag_items_data_item_reexport_name); rbml_w.wr_str(exp.name.as_str()); @@ -570,13 +570,13 @@ fn encode_info_for_mod(ecx: &EncodeContext, // Encode info about all the module children. for item in &md.items { rbml_w.start_tag(tag_mod_child); - rbml_w.wr_str(&def_to_string(local_def(item.id))[]); + rbml_w.wr_str(&def_to_string(local_def(item.id))); rbml_w.end_tag(); each_auxiliary_node_id(&**item, |auxiliary_node_id| { rbml_w.start_tag(tag_mod_child); rbml_w.wr_str(&def_to_string(local_def( - auxiliary_node_id))[]); + auxiliary_node_id))); rbml_w.end_tag(); true }); @@ -588,7 +588,7 @@ fn encode_info_for_mod(ecx: &EncodeContext, did, ecx.tcx.map.node_to_string(did)); rbml_w.start_tag(tag_mod_impl); - rbml_w.wr_str(&def_to_string(local_def(did))[]); + rbml_w.wr_str(&def_to_string(local_def(did))); rbml_w.end_tag(); } } @@ -623,7 +623,7 @@ fn encode_visibility(rbml_w: &mut Encoder, visibility: ast::Visibility) { ast::Public => 'y', ast::Inherited => 'i', }; - rbml_w.wr_str(&ch.to_string()[]); + rbml_w.wr_str(&ch.to_string()); rbml_w.end_tag(); } @@ -783,7 +783,7 @@ fn encode_generics<'a, 'tcx>(rbml_w: &mut Encoder, rbml_w.end_tag(); rbml_w.wr_tagged_str(tag_region_param_def_def_id, - &def_to_string(param.def_id)[]); + &def_to_string(param.def_id)); rbml_w.wr_tagged_u64(tag_region_param_def_space, param.space.to_uint() as u64); @@ -858,10 +858,10 @@ fn encode_info_for_method<'a, 'tcx>(ecx: &EncodeContext<'a, 'tcx>, encode_path(rbml_w, impl_path.chain(Some(elem).into_iter())); match ast_item_opt { Some(&ast::MethodImplItem(ref ast_method)) => { - encode_attributes(rbml_w, &ast_method.attrs[]); + encode_attributes(rbml_w, &ast_method.attrs); let scheme = ty::lookup_item_type(ecx.tcx, m.def_id); let any_types = !scheme.generics.types.is_empty(); - if any_types || is_default_impl || should_inline(&ast_method.attrs[]) { + if any_types || is_default_impl || should_inline(&ast_method.attrs) { encode_inlined_item(ecx, rbml_w, IIImplItemRef(local_def(parent_id), ast_item_opt.unwrap())); } @@ -906,7 +906,7 @@ fn encode_info_for_associated_type(ecx: &EncodeContext, match typedef_opt { None => {} Some(typedef) => { - encode_attributes(rbml_w, &typedef.attrs[]); + encode_attributes(rbml_w, &typedef.attrs); encode_type(ecx, rbml_w, ty::node_id_to_type(ecx.tcx, typedef.id)); } @@ -1040,7 +1040,7 @@ fn encode_info_for_item(ecx: &EncodeContext, encode_path(rbml_w, path); encode_visibility(rbml_w, vis); encode_stability(rbml_w, stab); - encode_attributes(rbml_w, &item.attrs[]); + encode_attributes(rbml_w, &item.attrs); rbml_w.end_tag(); } ast::ItemConst(_, _) => { @@ -1066,8 +1066,8 @@ fn encode_info_for_item(ecx: &EncodeContext, encode_bounds_and_type_for_item(rbml_w, ecx, item.id); encode_name(rbml_w, item.ident.name); encode_path(rbml_w, path); - encode_attributes(rbml_w, &item.attrs[]); - if tps_len > 0 || should_inline(&item.attrs[]) { + encode_attributes(rbml_w, &item.attrs); + if tps_len > 0 || should_inline(&item.attrs) { encode_inlined_item(ecx, rbml_w, IIItemRef(item)); } if tps_len == 0 { @@ -1083,7 +1083,7 @@ fn encode_info_for_item(ecx: &EncodeContext, encode_info_for_mod(ecx, rbml_w, m, - &item.attrs[], + 
&item.attrs, item.id, path, item.ident, @@ -1100,7 +1100,7 @@ fn encode_info_for_item(ecx: &EncodeContext, // Encode all the items in this module. for foreign_item in &fm.items { rbml_w.start_tag(tag_mod_child); - rbml_w.wr_str(&def_to_string(local_def(foreign_item.id))[]); + rbml_w.wr_str(&def_to_string(local_def(foreign_item.id))); rbml_w.end_tag(); } encode_visibility(rbml_w, vis); @@ -1128,8 +1128,8 @@ fn encode_info_for_item(ecx: &EncodeContext, encode_item_variances(rbml_w, ecx, item.id); encode_bounds_and_type_for_item(rbml_w, ecx, item.id); encode_name(rbml_w, item.ident.name); - encode_attributes(rbml_w, &item.attrs[]); - encode_repr_attrs(rbml_w, ecx, &item.attrs[]); + encode_attributes(rbml_w, &item.attrs); + encode_repr_attrs(rbml_w, ecx, &item.attrs); for v in &enum_definition.variants { encode_variant_id(rbml_w, local_def(v.node.id)); } @@ -1146,7 +1146,7 @@ fn encode_info_for_item(ecx: &EncodeContext, encode_enum_variant_info(ecx, rbml_w, item.id, - &(*enum_definition).variants[], + &(*enum_definition).variants, index); } ast::ItemStruct(ref struct_def, _) => { @@ -1172,11 +1172,11 @@ fn encode_info_for_item(ecx: &EncodeContext, encode_item_variances(rbml_w, ecx, item.id); encode_name(rbml_w, item.ident.name); - encode_attributes(rbml_w, &item.attrs[]); + encode_attributes(rbml_w, &item.attrs); encode_path(rbml_w, path.clone()); encode_stability(rbml_w, stab); encode_visibility(rbml_w, vis); - encode_repr_attrs(rbml_w, ecx, &item.attrs[]); + encode_repr_attrs(rbml_w, ecx, &item.attrs); /* Encode def_ids for each field and method for methods, write all the stuff get_trait_method @@ -1213,7 +1213,7 @@ fn encode_info_for_item(ecx: &EncodeContext, encode_family(rbml_w, 'i'); encode_bounds_and_type_for_item(rbml_w, ecx, item.id); encode_name(rbml_w, item.ident.name); - encode_attributes(rbml_w, &item.attrs[]); + encode_attributes(rbml_w, &item.attrs); encode_unsafety(rbml_w, unsafety); encode_polarity(rbml_w, polarity); match ty.node { @@ -1319,7 +1319,7 @@ fn encode_info_for_item(ecx: &EncodeContext, encode_generics(rbml_w, ecx, &trait_def.generics, &trait_predicates, tag_item_generics); encode_trait_ref(rbml_w, ecx, &*trait_def.trait_ref, tag_item_trait_ref); encode_name(rbml_w, item.ident.name); - encode_attributes(rbml_w, &item.attrs[]); + encode_attributes(rbml_w, &item.attrs); encode_visibility(rbml_w, vis); encode_stability(rbml_w, stab); for &method_def_id in &*ty::trait_item_def_ids(tcx, def_id) { @@ -1337,7 +1337,7 @@ fn encode_info_for_item(ecx: &EncodeContext, rbml_w.end_tag(); rbml_w.start_tag(tag_mod_child); - rbml_w.wr_str(&def_to_string(method_def_id.def_id())[]); + rbml_w.wr_str(&def_to_string(method_def_id.def_id())); rbml_w.end_tag(); } encode_path(rbml_w, path.clone()); @@ -1426,14 +1426,14 @@ fn encode_info_for_item(ecx: &EncodeContext, }; match trait_item { &ast::RequiredMethod(ref m) => { - encode_attributes(rbml_w, &m.attrs[]); + encode_attributes(rbml_w, &m.attrs); encode_trait_item(rbml_w); encode_item_sort(rbml_w, 'r'); encode_method_argument_names(rbml_w, &*m.decl); } &ast::ProvidedMethod(ref m) => { - encode_attributes(rbml_w, &m.attrs[]); + encode_attributes(rbml_w, &m.attrs); encode_trait_item(rbml_w); encode_item_sort(rbml_w, 'p'); encode_inlined_item(ecx, rbml_w, IITraitItemRef(def_id, trait_item)); @@ -1442,7 +1442,7 @@ fn encode_info_for_item(ecx: &EncodeContext, &ast::TypeTraitItem(ref associated_type) => { encode_attributes(rbml_w, - &associated_type.attrs[]); + &associated_type.attrs); encode_item_sort(rbml_w, 't'); } } @@ -1588,48 +1588,6 @@ 
fn encode_info_for_items(ecx: &EncodeContext, // Path and definition ID indexing -#[cfg(stage0)] -fn encode_index(rbml_w: &mut Encoder, index: Vec>, mut write_fn: F) where - F: FnMut(&mut SeekableMemWriter, &T), - T: Hash, -{ - let mut buckets: Vec>> = (0..256u16).map(|_| Vec::new()).collect(); - for elt in index { - let mut s = SipHasher::new(); - elt.val.hash(&mut s); - let h = s.finish() as uint; - (&mut buckets[h % 256]).push(elt); - } - - rbml_w.start_tag(tag_index); - let mut bucket_locs = Vec::new(); - rbml_w.start_tag(tag_index_buckets); - for bucket in &buckets { - bucket_locs.push(rbml_w.writer.tell().unwrap()); - rbml_w.start_tag(tag_index_buckets_bucket); - for elt in bucket { - rbml_w.start_tag(tag_index_buckets_bucket_elt); - assert!(elt.pos < 0xffff_ffff); - { - let wr: &mut SeekableMemWriter = rbml_w.writer; - wr.write_be_u32(elt.pos as u32); - } - write_fn(rbml_w.writer, &elt.val); - rbml_w.end_tag(); - } - rbml_w.end_tag(); - } - rbml_w.end_tag(); - rbml_w.start_tag(tag_index_table); - for pos in &bucket_locs { - assert!(*pos < 0xffff_ffff); - let wr: &mut SeekableMemWriter = rbml_w.writer; - wr.write_be_u32(*pos as u32); - } - rbml_w.end_tag(); - rbml_w.end_tag(); -} -#[cfg(not(stage0))] fn encode_index(rbml_w: &mut Encoder, index: Vec>, mut write_fn: F) where F: FnMut(&mut SeekableMemWriter, &T), T: Hash, @@ -1867,10 +1825,10 @@ fn encode_macro_defs(rbml_w: &mut Encoder, rbml_w.start_tag(tag_macro_def); encode_name(rbml_w, def.ident.name); - encode_attributes(rbml_w, &def.attrs[]); + encode_attributes(rbml_w, &def.attrs); rbml_w.start_tag(tag_macro_def_body); - rbml_w.wr_str(&pprust::tts_to_string(&def.body[])[]); + rbml_w.wr_str(&pprust::tts_to_string(&def.body)); rbml_w.end_tag(); rbml_w.end_tag(); @@ -1887,7 +1845,7 @@ fn encode_struct_field_attrs(rbml_w: &mut Encoder, krate: &ast::Crate) { fn visit_struct_field(&mut self, field: &ast::StructField) { self.rbml_w.start_tag(tag_struct_field); self.rbml_w.wr_tagged_u32(tag_struct_field_id, field.node.id); - encode_attributes(self.rbml_w, &field.node.attrs[]); + encode_attributes(self.rbml_w, &field.node.attrs); self.rbml_w.end_tag(); } } @@ -1959,13 +1917,13 @@ fn encode_misc_info(ecx: &EncodeContext, rbml_w.start_tag(tag_misc_info_crate_items); for item in &krate.module.items { rbml_w.start_tag(tag_mod_child); - rbml_w.wr_str(&def_to_string(local_def(item.id))[]); + rbml_w.wr_str(&def_to_string(local_def(item.id))); rbml_w.end_tag(); each_auxiliary_node_id(&**item, |auxiliary_node_id| { rbml_w.start_tag(tag_mod_child); rbml_w.wr_str(&def_to_string(local_def( - auxiliary_node_id))[]); + auxiliary_node_id))); rbml_w.end_tag(); true }); @@ -2132,17 +2090,17 @@ fn encode_metadata_inner(wr: &mut SeekableMemWriter, let mut rbml_w = writer::Encoder::new(wr); - encode_crate_name(&mut rbml_w, &ecx.link_meta.crate_name[]); + encode_crate_name(&mut rbml_w, &ecx.link_meta.crate_name); encode_crate_triple(&mut rbml_w, &tcx.sess .opts .target_triple - []); + ); encode_hash(&mut rbml_w, &ecx.link_meta.crate_hash); encode_dylib_dependency_formats(&mut rbml_w, &ecx); let mut i = rbml_w.writer.tell().unwrap(); - encode_attributes(&mut rbml_w, &krate.attrs[]); + encode_attributes(&mut rbml_w, &krate.attrs); stats.attr_bytes = rbml_w.writer.tell().unwrap() - i; i = rbml_w.writer.tell().unwrap(); diff --git a/src/librustc/metadata/loader.rs b/src/librustc/metadata/loader.rs index 01d1f4e7011f8..fbc3e76cf934b 100644 --- a/src/librustc/metadata/loader.rs +++ b/src/librustc/metadata/loader.rs @@ -329,7 +329,7 @@ impl<'a> Context<'a> { for (i, 
&CrateMismatch{ ref path, ref got }) in mismatches.enumerate() { self.sess.fileline_note(self.span, &format!("crate `{}`, path #{}, triple {}: {}", - self.ident, i+1, got, path.display())[]); + self.ident, i+1, got, path.display())); } } if self.rejected_via_hash.len() > 0 { @@ -339,7 +339,7 @@ impl<'a> Context<'a> { for (i, &CrateMismatch{ ref path, .. }) in mismatches.enumerate() { self.sess.fileline_note(self.span, &format!("crate `{}` path #{}: {}", - self.ident, i+1, path.display())[]); + self.ident, i+1, path.display())); } match self.root { &None => {} @@ -347,7 +347,7 @@ impl<'a> Context<'a> { for (i, path) in r.paths().iter().enumerate() { self.sess.fileline_note(self.span, &format!("crate `{}` path #{}: {}", - r.ident, i+1, path.display())[]); + r.ident, i+1, path.display())); } } } @@ -359,7 +359,7 @@ impl<'a> Context<'a> { for (i, &CrateMismatch { ref path, .. }) in mismatches.enumerate() { self.sess.fileline_note(self.span, &format!("crate `{}` path #{}: {}", - self.ident, i+1, path.display())[]); + self.ident, i+1, path.display())); } } self.sess.abort_if_errors(); @@ -472,26 +472,26 @@ impl<'a> Context<'a> { _ => { self.sess.span_err(self.span, &format!("multiple matching crates for `{}`", - self.crate_name)[]); + self.crate_name)); self.sess.note("candidates:"); for lib in &libraries { match lib.dylib { Some((ref p, _)) => { self.sess.note(&format!("path: {}", - p.display())[]); + p.display())); } None => {} } match lib.rlib { Some((ref p, _)) => { self.sess.note(&format!("path: {}", - p.display())[]); + p.display())); } None => {} } let data = lib.metadata.as_slice(); let name = decoder::get_crate_name(data); - note_crate_name(self.sess.diagnostic(), &name[]); + note_crate_name(self.sess.diagnostic(), &name); } None } @@ -545,11 +545,11 @@ impl<'a> Context<'a> { &format!("multiple {} candidates for `{}` \ found", flavor, - self.crate_name)[]); + self.crate_name)); self.sess.span_note(self.span, &format!(r"candidate #1: {}", ret.as_ref().unwrap().0 - .display())[]); + .display())); error = 1; ret = None; } @@ -557,7 +557,7 @@ impl<'a> Context<'a> { error += 1; self.sess.span_note(self.span, &format!(r"candidate #{}: {}", error, - lib.display())[]); + lib.display())); continue } *slot = Some(metadata); @@ -630,14 +630,14 @@ impl<'a> Context<'a> { let locs = locs.iter().map(|l| Path::new(&l[..])).filter(|loc| { if !loc.exists() { sess.err(&format!("extern location for {} does not exist: {}", - self.crate_name, loc.display())[]); + self.crate_name, loc.display())); return false; } let file = match loc.filename_str() { Some(file) => file, None => { sess.err(&format!("extern location for {} is not a file: {}", - self.crate_name, loc.display())[]); + self.crate_name, loc.display())); return false; } }; @@ -651,7 +651,7 @@ impl<'a> Context<'a> { } } sess.err(&format!("extern location for {} is of an unknown type: {}", - self.crate_name, loc.display())[]); + self.crate_name, loc.display())); false }); @@ -686,7 +686,7 @@ impl<'a> Context<'a> { } pub fn note_crate_name(diag: &SpanHandler, name: &str) { - diag.handler().note(&format!("crate name: {}", name)[]); + diag.handler().note(&format!("crate name: {}", name)); } impl ArchiveMetadata { diff --git a/src/librustc/metadata/macro_import.rs b/src/librustc/metadata/macro_import.rs index 28c98d455f046..d25dc4f58a5df 100644 --- a/src/librustc/metadata/macro_import.rs +++ b/src/librustc/metadata/macro_import.rs @@ -78,7 +78,7 @@ impl<'a, 'v> Visitor<'v> for MacroLoader<'a> { for attr in &item.attrs { let mut used = true; - match 
&attr.name()[] { + match &attr.name()[..] { "phase" => { self.sess.span_err(attr.span, "#[phase] is deprecated"); } @@ -86,7 +86,7 @@ impl<'a, 'v> Visitor<'v> for MacroLoader<'a> { self.sess.span_err(attr.span, "#[plugin] on `extern crate` is deprecated"); self.sess.span_help(attr.span, &format!("use a crate attribute instead, \ i.e. #![plugin({})]", - item.ident.as_str())[]); + item.ident.as_str())); } "macro_use" => { let names = attr.meta_item_list(); diff --git a/src/librustc/metadata/tydecode.rs b/src/librustc/metadata/tydecode.rs index 5805725a8fc8b..4a45b7fbfdcc7 100644 --- a/src/librustc/metadata/tydecode.rs +++ b/src/librustc/metadata/tydecode.rs @@ -305,7 +305,7 @@ fn parse_bound_region_(st: &mut PState, conv: &mut F) -> ty::BoundRegion wher } '[' => { let def = parse_def_(st, RegionParameter, conv); - let ident = token::str_to_ident(&parse_str(st, ']')[]); + let ident = token::str_to_ident(&parse_str(st, ']')); ty::BrNamed(def, ident.name) } 'f' => { @@ -344,7 +344,7 @@ fn parse_region_(st: &mut PState, conv: &mut F) -> ty::Region where assert_eq!(next(st), '|'); let index = parse_u32(st); assert_eq!(next(st), '|'); - let nm = token::str_to_ident(&parse_str(st, ']')[]); + let nm = token::str_to_ident(&parse_str(st, ']')); ty::ReEarlyBound(node_id, space, index, nm.name) } 'f' => { @@ -485,7 +485,7 @@ fn parse_ty_<'a, 'tcx, F>(st: &mut PState<'a, 'tcx>, conv: &mut F) -> Ty<'tcx> w assert_eq!(next(st), '|'); let space = parse_param_space(st); assert_eq!(next(st), '|'); - let name = token::intern(&parse_str(st, ']')[]); + let name = token::intern(&parse_str(st, ']')); return ty::mk_param(tcx, space, index, name); } '~' => return ty::mk_uniq(tcx, parse_ty_(st, conv)), diff --git a/src/librustc/middle/astconv_util.rs b/src/librustc/middle/astconv_util.rs index 7143e3caac208..d699ba40e8220 100644 --- a/src/librustc/middle/astconv_util.rs +++ b/src/librustc/middle/astconv_util.rs @@ -48,7 +48,7 @@ pub fn ast_ty_to_prim_ty<'tcx>(tcx: &ty::ctxt<'tcx>, ast_ty: &ast::Ty) None => { tcx.sess.span_bug(ast_ty.span, &format!("unbound path {}", - path.repr(tcx))[]) + path.repr(tcx))) } Some(&d) => d }; diff --git a/src/librustc/middle/astencode.rs b/src/librustc/middle/astencode.rs index ae10eb686b010..eb723830d383c 100644 --- a/src/librustc/middle/astencode.rs +++ b/src/librustc/middle/astencode.rs @@ -1852,7 +1852,7 @@ fn decode_side_tables(dcx: &DecodeContext, None => { dcx.tcx.sess.bug( &format!("unknown tag found in side tables: {:x}", - tag)[]); + tag)); } Some(value) => { let val_doc = entry_doc.get(c::tag_table_val as uint); @@ -1937,7 +1937,7 @@ fn decode_side_tables(dcx: &DecodeContext, _ => { dcx.tcx.sess.bug( &format!("unknown tag found in side tables: {:x}", - tag)[]); + tag)); } } } diff --git a/src/librustc/middle/cfg/construct.rs b/src/librustc/middle/cfg/construct.rs index d39b94a202e4a..d95dfb6feaec4 100644 --- a/src/librustc/middle/cfg/construct.rs +++ b/src/librustc/middle/cfg/construct.rs @@ -327,7 +327,7 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { let mut cond_exit = discr_exit; for arm in arms { cond_exit = self.add_dummy_node(&[cond_exit]); // 2 - let pats_exit = self.pats_any(&arm.pats[], + let pats_exit = self.pats_any(&arm.pats, cond_exit); // 3 let guard_exit = self.opt_expr(&arm.guard, pats_exit); // 4 @@ -582,14 +582,14 @@ impl<'a, 'tcx> CFGBuilder<'a, 'tcx> { self.tcx.sess.span_bug( expr.span, &format!("no loop scope for id {}", - loop_id)[]); + loop_id)); } r => { self.tcx.sess.span_bug( expr.span, &format!("bad entry `{:?}` in def_map for label", - r)[]); + r)); } } 
} diff --git a/src/librustc/middle/cfg/graphviz.rs b/src/librustc/middle/cfg/graphviz.rs index 46b4a51c9d6fe..14c6ff01e0e66 100644 --- a/src/librustc/middle/cfg/graphviz.rs +++ b/src/librustc/middle/cfg/graphviz.rs @@ -54,7 +54,7 @@ fn replace_newline_with_backslash_l(s: String) -> String { } impl<'a, 'ast> dot::Labeller<'a, Node<'a>, Edge<'a>> for LabelledCFG<'a, 'ast> { - fn graph_id(&'a self) -> dot::Id<'a> { dot::Id::new(&self.name[]).ok().unwrap() } + fn graph_id(&'a self) -> dot::Id<'a> { dot::Id::new(&self.name[..]).ok().unwrap() } fn node_id(&'a self, &(i,_): &Node<'a>) -> dot::Id<'a> { dot::Id::new(format!("N{}", i.node_id())).ok().unwrap() @@ -92,7 +92,7 @@ impl<'a, 'ast> dot::Labeller<'a, Node<'a>, Edge<'a>> for LabelledCFG<'a, 'ast> { let s = replace_newline_with_backslash_l(s); label.push_str(&format!("exiting scope_{} {}", i, - &s[..])[]); + &s[..])); } dot::LabelText::EscStr(label.into_cow()) } diff --git a/src/librustc/middle/check_const.rs b/src/librustc/middle/check_const.rs index 41d425cd2f665..f1c8ad947642f 100644 --- a/src/librustc/middle/check_const.rs +++ b/src/librustc/middle/check_const.rs @@ -176,7 +176,7 @@ impl<'a, 'tcx> CheckCrateVisitor<'a, 'tcx> { }; self.tcx.sess.span_err(e.span, &format!("mutable statics are not allowed \ - to have {}", suffix)[]); + to have {}", suffix)); } fn check_static_type(&self, e: &ast::Expr) { @@ -382,7 +382,7 @@ fn check_expr<'a, 'tcx>(v: &mut CheckCrateVisitor<'a, 'tcx>, if v.mode != Mode::Var { v.tcx.sess.span_err(e.span, &format!("{}s are not allowed to have destructors", - v.msg())[]); + v.msg())); } } _ => {} diff --git a/src/librustc/middle/check_match.rs b/src/librustc/middle/check_match.rs index 86c59b24e3e93..7bd64a4f487d6 100644 --- a/src/librustc/middle/check_match.rs +++ b/src/librustc/middle/check_match.rs @@ -163,7 +163,7 @@ fn check_expr(cx: &mut MatchCheckCtxt, ex: &ast::Expr) { // First, check legality of move bindings. check_legality_of_move_bindings(cx, arm.guard.is_some(), - &arm.pats[]); + &arm.pats); // Second, if there is a guard on each arm, make sure it isn't // assigning or borrowing anything mutably. 
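// Editorial sketch (not part of the patch): nearly every hunk in this series swaps the
// deprecated full-range index `&expr[]` for either a plain borrow `&expr` (where deref
// coercion already yields the `&str`/`&[T]` the callee expects, as with `&arm.pats` in the
// hunk above) or an explicit `&expr[..]` (where the call site itself needs a slice value,
// e.g. to `match` on it or compare it against a literal). The helper and values below are
// hypothetical, chosen only to illustrate the two forms:
fn takes_str(s: &str) -> usize {
    s.len()
}

fn main() {
    let owned = "staged_api".to_string();
    // `&String` coerces to `&str` at the call boundary, so no slicing syntax is needed.
    let _ = takes_str(&owned);
    // An explicit full-range slice is still required when the slice itself is inspected.
    match &owned[..] {
        "staged_api" => println!("matched"),
        other => println!("no match: {}", other),
    }
}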
@@ -1101,7 +1101,7 @@ fn check_legality_of_move_bindings(cx: &MatchCheckCtxt, &format!("binding pattern {} is not an \ identifier: {:?}", p.id, - p.node)[]); + p.node)); } } } diff --git a/src/librustc/middle/dataflow.rs b/src/librustc/middle/dataflow.rs index 085d5cbc347e5..cf33cd7136578 100644 --- a/src/librustc/middle/dataflow.rs +++ b/src/librustc/middle/dataflow.rs @@ -554,7 +554,7 @@ fn bits_to_string(words: &[uint]) -> String { let mut v = word; for _ in 0..usize::BYTES { result.push(sep); - result.push_str(&format!("{:02x}", v & 0xFF)[]); + result.push_str(&format!("{:02x}", v & 0xFF)); v >>= 8; sep = '-'; } diff --git a/src/librustc/middle/dependency_format.rs b/src/librustc/middle/dependency_format.rs index ad9f4eade5c90..40e7610582f9c 100644 --- a/src/librustc/middle/dependency_format.rs +++ b/src/librustc/middle/dependency_format.rs @@ -118,7 +118,7 @@ fn calculate_type(sess: &session::Session, let src = sess.cstore.get_used_crate_source(cnum).unwrap(); if src.rlib.is_some() { return } sess.err(&format!("dependency `{}` not found in rlib format", - data.name)[]); + data.name)); }); return Vec::new(); } @@ -197,7 +197,7 @@ fn calculate_type(sess: &session::Session, match kind { cstore::RequireStatic => "rlib", cstore::RequireDynamic => "dylib", - })[]); + })); } } } @@ -222,7 +222,7 @@ fn add_library(sess: &session::Session, let data = sess.cstore.get_crate_data(cnum); sess.err(&format!("cannot satisfy dependencies so `{}` only \ shows up once", - data.name)[]); + data.name)); sess.help("having upstream crates all available in one format \ will likely make this go away"); } diff --git a/src/librustc/middle/effect.rs b/src/librustc/middle/effect.rs index abb8f35f662b5..ba81b2f3899a8 100644 --- a/src/librustc/middle/effect.rs +++ b/src/librustc/middle/effect.rs @@ -9,7 +9,7 @@ // except according to those terms. //! Enforces the Rust effect system. Currently there is just one effect, -/// `unsafe`. +//! `unsafe`. 
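// Editorial aside (not part of the patch): the effect.rs hunk above turns a stray `///`
// into `//!` because the sentence continues a module-level doc block. Outer doc comments
// (`///`) attach to the item that follows, while inner doc comments (`//!`) attach to the
// enclosing module or crate. The item name below is hypothetical, for illustration only:
//! Inner doc comment: documents the enclosing module.

/// Outer doc comment: documents `documented_item`.
pub fn documented_item() {}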
use self::UnsafeContext::*; use middle::def; diff --git a/src/librustc/middle/expr_use_visitor.rs b/src/librustc/middle/expr_use_visitor.rs index e99d214742a0b..625093e3c5dea 100644 --- a/src/librustc/middle/expr_use_visitor.rs +++ b/src/librustc/middle/expr_use_visitor.rs @@ -841,7 +841,7 @@ impl<'d,'t,'tcx,TYPER:mc::Typer<'tcx>> ExprUseVisitor<'d,'t,'tcx,TYPER> { ty::ty_rptr(r, ref m) => (m.mutbl, r), _ => self.tcx().sess.span_bug(expr.span, &format!("bad overloaded deref type {}", - method_ty.repr(self.tcx()))[]) + method_ty.repr(self.tcx()))) }; let bk = ty::BorrowKind::from_mutbl(m); self.delegate.borrow(expr.id, expr.span, cmt, diff --git a/src/librustc/middle/infer/combine.rs b/src/librustc/middle/infer/combine.rs index 0eeafb767d8a6..99cb2a0978e7e 100644 --- a/src/librustc/middle/infer/combine.rs +++ b/src/librustc/middle/infer/combine.rs @@ -433,7 +433,7 @@ pub fn super_tys<'tcx, C: Combine<'tcx>>(this: &C, &format!("{}: bot and var types should have been handled ({},{})", this.tag(), a.repr(this.infcx().tcx), - b.repr(this.infcx().tcx))[]); + b.repr(this.infcx().tcx))); } (&ty::ty_err, _) | (_, &ty::ty_err) => { @@ -818,7 +818,7 @@ impl<'cx, 'tcx> ty_fold::TypeFolder<'tcx> for Generalizer<'cx, 'tcx> { self.tcx().sess.span_bug( self.span, &format!("Encountered early bound region when generalizing: {}", - r.repr(self.tcx()))[]); + r.repr(self.tcx()))); } // Always make a fresh region variable for skolemized regions; diff --git a/src/librustc/middle/infer/error_reporting.rs b/src/librustc/middle/infer/error_reporting.rs index 53032f9b9ac64..110c7bf41e559 100644 --- a/src/librustc/middle/infer/error_reporting.rs +++ b/src/librustc/middle/infer/error_reporting.rs @@ -449,7 +449,7 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { &format!( "consider adding an explicit lifetime bound `{}: {}`...", bound_kind.user_string(self.tcx), - sub.user_string(self.tcx))[]); + sub.user_string(self.tcx))); } ty::ReStatic => { @@ -460,7 +460,7 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { origin.span(), &format!( "consider adding an explicit lifetime bound `{}: 'static`...", - bound_kind.user_string(self.tcx))[]); + bound_kind.user_string(self.tcx))); } _ => { @@ -472,10 +472,10 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { origin.span(), &format!( "consider adding an explicit lifetime bound for `{}`", - bound_kind.user_string(self.tcx))[]); + bound_kind.user_string(self.tcx))); note_and_explain_region( self.tcx, - &format!("{} must be valid for ", labeled_user_string)[], + &format!("{} must be valid for ", labeled_user_string), sub, "..."); } @@ -525,7 +525,7 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { &format!("...but `{}` is only valid for ", ty::local_var_name_str(self.tcx, upvar_id.var_id) - .to_string())[], + .to_string()), sup, ""); } @@ -568,7 +568,7 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { &format!("captured variable `{}` does not \ outlive the enclosing closure", ty::local_var_name_str(self.tcx, - id).to_string())[]); + id).to_string())); note_and_explain_region( self.tcx, "captured variable is valid for ", @@ -610,7 +610,7 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { span, &format!("the type `{}` does not fulfill the \ required lifetime", - self.ty_to_string(ty))[]); + self.ty_to_string(ty))); note_and_explain_region(self.tcx, "type must outlive ", sub, @@ -636,7 +636,7 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { span, &format!("the type `{}` (provided 
as the value of \ a type parameter) is not valid at this point", - self.ty_to_string(ty))[]); + self.ty_to_string(ty))); note_and_explain_region(self.tcx, "type must outlive ", sub, @@ -713,7 +713,7 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { span, &format!("type of expression contains references \ that are not valid during the expression: `{}`", - self.ty_to_string(t))[]); + self.ty_to_string(t))); note_and_explain_region( self.tcx, "type is only valid for ", @@ -752,7 +752,7 @@ impl<'a, 'tcx> ErrorReporting<'tcx> for InferCtxt<'a, 'tcx> { span, &format!("in type `{}`, reference has a longer lifetime \ than the data it references", - self.ty_to_string(ty))[]); + self.ty_to_string(ty))); note_and_explain_region( self.tcx, "the pointer is valid for ", @@ -988,7 +988,7 @@ impl<'a, 'tcx> Rebuilder<'a, 'tcx> { names.push(lt_name); } names.sort(); - let name = token::str_to_ident(&names[0][]).name; + let name = token::str_to_ident(&names[0]).name; return (name_to_dummy_lifetime(name), Kept); } return (self.life_giver.give_lifetime(), Fresh); @@ -1240,7 +1240,7 @@ impl<'a, 'tcx> Rebuilder<'a, 'tcx> { .sess .fatal(&format!( "unbound path {}", - pprust::path_to_string(path))[]) + pprust::path_to_string(path))) } Some(&d) => d }; @@ -1479,7 +1479,7 @@ impl<'a, 'tcx> ErrorReportingHelpers<'tcx> for InferCtxt<'a, 'tcx> { var_origin.span(), &format!("cannot infer an appropriate lifetime{} \ due to conflicting requirements", - var_description)[]); + var_description)); } fn note_region_origin(&self, origin: &SubregionOrigin<'tcx>) { @@ -1527,7 +1527,7 @@ impl<'a, 'tcx> ErrorReportingHelpers<'tcx> for InferCtxt<'a, 'tcx> { self.tcx.sess.span_note( trace.origin.span(), &format!("...so that {} ({})", - desc, values_str)[]); + desc, values_str)); } None => { // Really should avoid printing this error at @@ -1536,7 +1536,7 @@ impl<'a, 'tcx> ErrorReportingHelpers<'tcx> for InferCtxt<'a, 'tcx> { // doing right now. 
- nmatsakis self.tcx.sess.span_note( trace.origin.span(), - &format!("...so that {}", desc)[]); + &format!("...so that {}", desc)); } } } @@ -1552,7 +1552,7 @@ impl<'a, 'tcx> ErrorReportingHelpers<'tcx> for InferCtxt<'a, 'tcx> { &format!( "...so that closure can access `{}`", ty::local_var_name_str(self.tcx, upvar_id.var_id) - .to_string())[]) + .to_string())) } infer::InfStackClosure(span) => { self.tcx.sess.span_note( @@ -1577,7 +1577,7 @@ impl<'a, 'tcx> ErrorReportingHelpers<'tcx> for InferCtxt<'a, 'tcx> { does not outlive the enclosing closure", ty::local_var_name_str( self.tcx, - id).to_string())[]); + id).to_string())); } infer::IndexSlice(span) => { self.tcx.sess.span_note( @@ -1626,7 +1626,7 @@ impl<'a, 'tcx> ErrorReportingHelpers<'tcx> for InferCtxt<'a, 'tcx> { span, &format!("...so type `{}` of expression is valid during the \ expression", - self.ty_to_string(t))[]); + self.ty_to_string(t))); } infer::BindingTypeIsNotValidAtDecl(span) => { self.tcx.sess.span_note( @@ -1638,14 +1638,14 @@ impl<'a, 'tcx> ErrorReportingHelpers<'tcx> for InferCtxt<'a, 'tcx> { span, &format!("...so that the reference type `{}` \ does not outlive the data it points at", - self.ty_to_string(ty))[]); + self.ty_to_string(ty))); } infer::RelateParamBound(span, t) => { self.tcx.sess.span_note( span, &format!("...so that the type `{}` \ will meet its required lifetime bounds", - self.ty_to_string(t))[]); + self.ty_to_string(t))); } infer::RelateDefaultParamBound(span, t) => { self.tcx.sess.span_note( @@ -1653,13 +1653,13 @@ impl<'a, 'tcx> ErrorReportingHelpers<'tcx> for InferCtxt<'a, 'tcx> { &format!("...so that type parameter \ instantiated with `{}`, \ will meet its declared lifetime bounds", - self.ty_to_string(t))[]); + self.ty_to_string(t))); } infer::RelateRegionParamBound(span) => { self.tcx.sess.span_note( span, &format!("...so that the declared lifetime parameter bounds \ - are satisfied")[]); + are satisfied")); } infer::SafeDestructor(span) => { self.tcx.sess.span_note( @@ -1717,7 +1717,7 @@ fn lifetimes_in_scope(tcx: &ty::ctxt, Some(node) => match node { ast_map::NodeItem(item) => match item.node { ast::ItemFn(_, _, _, ref gen, _) => { - taken.push_all(&gen.lifetimes[]); + taken.push_all(&gen.lifetimes); None }, _ => None @@ -1725,7 +1725,7 @@ fn lifetimes_in_scope(tcx: &ty::ctxt, ast_map::NodeImplItem(ii) => { match *ii { ast::MethodImplItem(ref m) => { - taken.push_all(&m.pe_generics().lifetimes[]); + taken.push_all(&m.pe_generics().lifetimes); Some(m.id) } ast::TypeImplItem(_) => None, @@ -1784,7 +1784,7 @@ impl LifeGiver { let mut lifetime; loop { let mut s = String::from_str("'"); - s.push_str(&num_to_string(self.counter.get())[]); + s.push_str(&num_to_string(self.counter.get())); if !self.taken.contains(&s) { lifetime = name_to_dummy_lifetime( token::str_to_ident(&s[..]).name); diff --git a/src/librustc/middle/infer/higher_ranked/mod.rs b/src/librustc/middle/infer/higher_ranked/mod.rs index a729156c88b35..b0b9a80589d0d 100644 --- a/src/librustc/middle/infer/higher_ranked/mod.rs +++ b/src/librustc/middle/infer/higher_ranked/mod.rs @@ -189,7 +189,7 @@ impl<'tcx,C> HigherRankedRelations<'tcx> for C span, &format!("region {:?} is not associated with \ any bound region from A!", - r0)[]) + r0)) } } @@ -322,7 +322,7 @@ impl<'tcx,C> HigherRankedRelations<'tcx> for C } infcx.tcx.sess.span_bug( span, - &format!("could not find original bound region for {:?}", r)[]); + &format!("could not find original bound region for {:?}", r)); } fn fresh_bound_variable(infcx: &InferCtxt, debruijn: 
ty::DebruijnIndex) -> ty::Region { @@ -339,7 +339,7 @@ fn var_ids<'tcx, T: Combine<'tcx>>(combiner: &T, r => { combiner.infcx().tcx.sess.span_bug( combiner.trace().origin.span(), - &format!("found non-region-vid: {:?}", r)[]); + &format!("found non-region-vid: {:?}", r)); } }).collect() } diff --git a/src/librustc/middle/infer/mod.rs b/src/librustc/middle/infer/mod.rs index b0576ff55ff73..835964828d419 100644 --- a/src/librustc/middle/infer/mod.rs +++ b/src/librustc/middle/infer/mod.rs @@ -999,7 +999,7 @@ impl<'a, 'tcx> InferCtxt<'a, 'tcx> { self.tcx.sess.span_err(sp, &format!("{}{}", mk_msg(resolved_expected.map(|t| self.ty_to_string(t)), actual_ty), - error_str)[]); + error_str)); if let Some(err) = err { ty::note_and_explain_type_err(self.tcx, err) diff --git a/src/librustc/middle/infer/region_inference/mod.rs b/src/librustc/middle/infer/region_inference/mod.rs index b4fd34f206fa7..5959b4a7c507a 100644 --- a/src/librustc/middle/infer/region_inference/mod.rs +++ b/src/librustc/middle/infer/region_inference/mod.rs @@ -473,7 +473,7 @@ impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> { origin.span(), &format!("cannot relate bound region: {} <= {}", sub.repr(self.tcx), - sup.repr(self.tcx))[]); + sup.repr(self.tcx))); } (_, ReStatic) => { // all regions are subregions of static, so we can ignore this @@ -733,7 +733,7 @@ impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> { self.tcx.sess.bug( &format!("cannot relate bound region: LUB({}, {})", a.repr(self.tcx), - b.repr(self.tcx))[]); + b.repr(self.tcx))); } (ReStatic, _) | (_, ReStatic) => { @@ -750,7 +750,7 @@ impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> { &format!("lub_concrete_regions invoked with \ non-concrete regions: {:?}, {:?}", a, - b)[]); + b)); } (ReFree(ref fr), ReScope(s_id)) | @@ -834,7 +834,7 @@ impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> { self.tcx.sess.bug( &format!("cannot relate bound region: GLB({}, {})", a.repr(self.tcx), - b.repr(self.tcx))[]); + b.repr(self.tcx))); } (ReStatic, r) | (r, ReStatic) => { @@ -854,7 +854,7 @@ impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> { &format!("glb_concrete_regions invoked with \ non-concrete regions: {:?}, {:?}", a, - b)[]); + b)); } (ReFree(ref fr), ReScope(s_id)) | @@ -1417,7 +1417,7 @@ impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> { for var {:?}, lower_bounds={}, upper_bounds={}", node_idx, lower_bounds.repr(self.tcx), - upper_bounds.repr(self.tcx))[]); + upper_bounds.repr(self.tcx))); } fn collect_error_for_contracting_node( @@ -1461,7 +1461,7 @@ impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> { &format!("collect_error_for_contracting_node() could not find error \ for var {:?}, upper_bounds={}", node_idx, - upper_bounds.repr(self.tcx))[]); + upper_bounds.repr(self.tcx))); } fn collect_concrete_regions(&self, diff --git a/src/librustc/middle/infer/resolve.rs b/src/librustc/middle/infer/resolve.rs index 7bb3106b0ba6c..547696c0c4c2f 100644 --- a/src/librustc/middle/infer/resolve.rs +++ b/src/librustc/middle/infer/resolve.rs @@ -96,7 +96,7 @@ impl<'a, 'tcx> ty_fold::TypeFolder<'tcx> for FullTypeResolver<'a, 'tcx> { ty::ty_infer(_) => { self.infcx.tcx.sess.bug( &format!("Unexpected type in full type resolver: {}", - t.repr(self.infcx.tcx))[]); + t.repr(self.infcx.tcx))); } _ => { ty_fold::super_fold_ty(self, t) diff --git a/src/librustc/middle/liveness.rs b/src/librustc/middle/liveness.rs index e58136fb3f4e4..145fccd7972bb 100644 --- a/src/librustc/middle/liveness.rs +++ b/src/librustc/middle/liveness.rs @@ -325,7 +325,7 @@ impl<'a, 'tcx> IrMaps<'a, 'tcx> { self.tcx .sess .span_bug(span, &format!("no variable 
registered for id {}", - node_id)[]); + node_id)); } } } @@ -585,7 +585,7 @@ impl<'a, 'tcx> Liveness<'a, 'tcx> { self.ir.tcx.sess.span_bug( span, &format!("no live node registered for node {}", - node_id)[]); + node_id)); } } } diff --git a/src/librustc/middle/mem_categorization.rs b/src/librustc/middle/mem_categorization.rs index 4be7bb9c365a1..d1fba421bbe58 100644 --- a/src/librustc/middle/mem_categorization.rs +++ b/src/librustc/middle/mem_categorization.rs @@ -624,7 +624,7 @@ impl<'t,'tcx,TYPER:Typer<'tcx>> MemCategorizationContext<'t,TYPER> { span, &format!("Upvar of non-closure {} - {}", fn_node_id, - ty.repr(self.tcx()))[]); + ty.repr(self.tcx()))); } } } diff --git a/src/librustc/middle/reachable.rs b/src/librustc/middle/reachable.rs index 0af226de251a1..7774314b6e088 100644 --- a/src/librustc/middle/reachable.rs +++ b/src/librustc/middle/reachable.rs @@ -50,7 +50,7 @@ fn generics_require_inlining(generics: &ast::Generics) -> bool { // monomorphized or it was marked with `#[inline]`. This will only return // true for functions. fn item_might_be_inlined(item: &ast::Item) -> bool { - if attributes_specify_inlining(&item.attrs[]) { + if attributes_specify_inlining(&item.attrs) { return true } @@ -65,7 +65,7 @@ fn item_might_be_inlined(item: &ast::Item) -> bool { fn method_might_be_inlined(tcx: &ty::ctxt, method: &ast::Method, impl_src: ast::DefId) -> bool { - if attributes_specify_inlining(&method.attrs[]) || + if attributes_specify_inlining(&method.attrs) || generics_require_inlining(method.pe_generics()) { return true } @@ -202,7 +202,7 @@ impl<'a, 'tcx> ReachableContext<'a, 'tcx> { ast::MethodImplItem(ref method) => { if generics_require_inlining(method.pe_generics()) || attributes_specify_inlining( - &method.attrs[]) { + &method.attrs) { true } else { let impl_did = self.tcx @@ -249,7 +249,7 @@ impl<'a, 'tcx> ReachableContext<'a, 'tcx> { None => { self.tcx.sess.bug(&format!("found unmapped ID in worklist: \ {}", - search_item)[]) + search_item)) } } } @@ -342,7 +342,7 @@ impl<'a, 'tcx> ReachableContext<'a, 'tcx> { .bug(&format!("found unexpected thingy in worklist: {}", self.tcx .map - .node_to_string(search_item))[]) + .node_to_string(search_item))) } } } diff --git a/src/librustc/middle/stability.rs b/src/librustc/middle/stability.rs index 0a0f555f97769..cfa5e5fce3879 100644 --- a/src/librustc/middle/stability.rs +++ b/src/librustc/middle/stability.rs @@ -181,7 +181,7 @@ impl Index { pub fn new(krate: &Crate) -> Index { let mut staged_api = false; for attr in &krate.attrs { - if &attr.name()[] == "staged_api" { + if &attr.name()[..] 
== "staged_api" { match attr.node.value.node { ast::MetaWord(_) => { attr::mark_used(attr); diff --git a/src/librustc/middle/subst.rs b/src/librustc/middle/subst.rs index 04fd03ab34224..684b28d03739e 100644 --- a/src/librustc/middle/subst.rs +++ b/src/librustc/middle/subst.rs @@ -639,7 +639,7 @@ impl<'a, 'tcx> TypeFolder<'tcx> for SubstFolder<'a, 'tcx> { (space={:?}, index={})", region_name.as_str(), self.root_ty.repr(self.tcx()), - space, i)[]); + space, i)); } } } @@ -696,7 +696,7 @@ impl<'a,'tcx> SubstFolder<'a,'tcx> { p.space, p.idx, self.root_ty.repr(self.tcx()), - self.substs.repr(self.tcx()))[]); + self.substs.repr(self.tcx()))); } }; diff --git a/src/librustc/middle/traits/coherence.rs b/src/librustc/middle/traits/coherence.rs index e199a60c370e3..6d3b910e720a7 100644 --- a/src/librustc/middle/traits/coherence.rs +++ b/src/librustc/middle/traits/coherence.rs @@ -198,7 +198,7 @@ fn ty_is_local_constructor<'tcx>(tcx: &ty::ctxt<'tcx>, ty: Ty<'tcx>) -> bool { ty::ty_err => { tcx.sess.bug( &format!("ty_is_local invoked on unexpected type: {}", - ty.repr(tcx))[]) + ty.repr(tcx))) } } } diff --git a/src/librustc/middle/traits/error_reporting.rs b/src/librustc/middle/traits/error_reporting.rs index 2197cbeb85db4..d2b5b460d1420 100644 --- a/src/librustc/middle/traits/error_reporting.rs +++ b/src/librustc/middle/traits/error_reporting.rs @@ -422,5 +422,5 @@ pub fn suggest_new_overflow_limit(tcx: &ty::ctxt, span: Span) { span, &format!( "consider adding a `#![recursion_limit=\"{}\"]` attribute to your crate", - suggested_limit)[]); + suggested_limit)); } diff --git a/src/librustc/middle/traits/fulfill.rs b/src/librustc/middle/traits/fulfill.rs index a9cac4be3e368..c1066aa899eae 100644 --- a/src/librustc/middle/traits/fulfill.rs +++ b/src/librustc/middle/traits/fulfill.rs @@ -227,7 +227,7 @@ impl<'tcx> FulfillmentContext<'tcx> { } pub fn pending_obligations(&self) -> &[PredicateObligation<'tcx>] { - &self.predicates[] + &self.predicates } /// Attempts to select obligations using `selcx`. 
If `only_new_obligations` is true, then it diff --git a/src/librustc/middle/traits/select.rs b/src/librustc/middle/traits/select.rs index 0e29892084175..085758b44b5c7 100644 --- a/src/librustc/middle/traits/select.rs +++ b/src/librustc/middle/traits/select.rs @@ -1575,7 +1575,7 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> { self.tcx().sess.bug( &format!( "asked to assemble builtin bounds of unexpected type: {}", - self_ty.repr(self.tcx()))[]); + self_ty.repr(self.tcx()))); } }; @@ -1727,7 +1727,7 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> { self.tcx().sess.span_bug( obligation.cause.span, &format!("builtin bound for {} was ambig", - obligation.repr(self.tcx()))[]); + obligation.repr(self.tcx()))); } } } @@ -1995,7 +1995,7 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> { self.tcx().sess.bug( &format!("Impl {} was matchable against {} but now is not", impl_def_id.repr(self.tcx()), - obligation.repr(self.tcx()))[]); + obligation.repr(self.tcx()))); } } } diff --git a/src/librustc/middle/ty.rs b/src/librustc/middle/ty.rs index e9908397f9703..1206424550f95 100644 --- a/src/librustc/middle/ty.rs +++ b/src/librustc/middle/ty.rs @@ -73,7 +73,6 @@ use std::cell::{Cell, RefCell}; use std::cmp; use std::fmt; use std::hash::{Hash, SipHasher, Hasher}; -#[cfg(stage0)] use std::hash::Writer; use std::mem; use std::ops; use std::rc::Rc; @@ -959,13 +958,6 @@ impl<'tcx> PartialEq for TyS<'tcx> { } impl<'tcx> Eq for TyS<'tcx> {} -#[cfg(stage0)] -impl<'tcx, S: Writer + Hasher> Hash for TyS<'tcx> { - fn hash(&self, s: &mut S) { - (self as *const _).hash(s) - } -} -#[cfg(not(stage0))] impl<'tcx> Hash for TyS<'tcx> { fn hash(&self, s: &mut H) { (self as *const _).hash(s) @@ -988,13 +980,6 @@ impl<'tcx> PartialEq for InternedTy<'tcx> { impl<'tcx> Eq for InternedTy<'tcx> {} -#[cfg(stage0)] -impl<'tcx, S: Writer + Hasher> Hash for InternedTy<'tcx> { - fn hash(&self, s: &mut S) { - self.ty.sty.hash(s) - } -} -#[cfg(not(stage0))] impl<'tcx> Hash for InternedTy<'tcx> { fn hash(&self, s: &mut H) { self.ty.sty.hash(s) @@ -2295,7 +2280,7 @@ impl<'a, 'tcx> ParameterEnvironment<'a, 'tcx> { _ => { cx.sess.bug(&format!("ParameterEnvironment::from_item(): \ `{}` is not an item", - cx.map.node_to_string(id))[]) + cx.map.node_to_string(id))) } } } @@ -2737,7 +2722,7 @@ impl FlagComputation { fn add_fn_sig(&mut self, fn_sig: &PolyFnSig) { let mut computation = FlagComputation::new(); - computation.add_tys(&fn_sig.0.inputs[]); + computation.add_tys(&fn_sig.0.inputs); if let ty::FnConverging(output) = fn_sig.0.output { computation.add_ty(output); @@ -3177,7 +3162,7 @@ pub fn sequence_element_type<'tcx>(cx: &ctxt<'tcx>, ty: Ty<'tcx>) -> Ty<'tcx> { ty_str => mk_mach_uint(cx, ast::TyU8), ty_open(ty) => sequence_element_type(cx, ty), _ => cx.sess.bug(&format!("sequence_element_type called on non-sequence value: {}", - ty_to_string(cx, ty))[]), + ty_to_string(cx, ty))), } } @@ -3538,7 +3523,7 @@ pub fn type_contents<'tcx>(cx: &ctxt<'tcx>, ty: Ty<'tcx>) -> TypeContents { let variants = substd_enum_variants(cx, did, substs); let mut res = TypeContents::union(&variants[..], |variant| { - TypeContents::union(&variant.args[], + TypeContents::union(&variant.args, |arg_ty| { tc_ty(cx, *arg_ty, cache) }) @@ -4121,7 +4106,7 @@ pub fn close_type<'tcx>(cx: &ctxt<'tcx>, ty: Ty<'tcx>) -> Ty<'tcx> { match ty.sty { ty_open(ty) => mk_rptr(cx, cx.mk_region(ReStatic), mt {ty: ty, mutbl:ast::MutImmutable}), _ => cx.sess.bug(&format!("Trying to close a non-open type {}", - ty_to_string(cx, ty))[]) + ty_to_string(cx, ty))) } } @@ -4222,7 +4207,7 @@ 
pub fn node_id_to_trait_ref<'tcx>(cx: &ctxt<'tcx>, id: ast::NodeId) Some(ty) => ty.clone(), None => cx.sess.bug( &format!("node_id_to_trait_ref: no trait ref for node `{}`", - cx.map.node_to_string(id))[]) + cx.map.node_to_string(id))) } } @@ -4231,7 +4216,7 @@ pub fn node_id_to_type<'tcx>(cx: &ctxt<'tcx>, id: ast::NodeId) -> Ty<'tcx> { Some(ty) => ty, None => cx.sess.bug( &format!("node_id_to_type: no type for node `{}`", - cx.map.node_to_string(id))[]) + cx.map.node_to_string(id))) } } @@ -4305,7 +4290,7 @@ pub fn ty_region(tcx: &ctxt, tcx.sess.span_bug( span, &format!("ty_region() invoked on an inappropriate ty: {:?}", - s)[]); + s)); } } } @@ -4370,11 +4355,11 @@ pub fn expr_span(cx: &ctxt, id: NodeId) -> Span { Some(f) => { cx.sess.bug(&format!("Node id {} is not an expr: {:?}", id, - f)[]); + f)); } None => { cx.sess.bug(&format!("Node id {} is not present \ - in the node map", id)[]); + in the node map", id)); } } } @@ -4390,14 +4375,14 @@ pub fn local_var_name_str(cx: &ctxt, id: NodeId) -> InternedString { cx.sess.bug( &format!("Variable id {} maps to {:?}, not local", id, - pat)[]); + pat)); } } } r => { cx.sess.bug(&format!("Variable id {} maps to {:?}, not local", id, - r)[]); + r)); } } } @@ -4428,7 +4413,7 @@ pub fn adjust_ty<'tcx, F>(cx: &ctxt<'tcx>, cx.sess.bug( &format!("AdjustReifyFnPointer adjustment on non-fn-item: \ {:?}", - b)[]); + b)); } } } @@ -4459,7 +4444,7 @@ pub fn adjust_ty<'tcx, F>(cx: &ctxt<'tcx>, {}", i, ty_to_string(cx, adjusted_ty)) - []); + ); } } } @@ -4522,7 +4507,7 @@ pub fn unsize_ty<'tcx>(cx: &ctxt<'tcx>, } _ => cx.sess.span_bug(span, &format!("UnsizeLength with bad sty: {:?}", - ty_to_string(cx, ty))[]) + ty_to_string(cx, ty))) }, &UnsizeStruct(box ref k, tp_index) => match ty.sty { ty_struct(did, substs) => { @@ -4534,7 +4519,7 @@ pub fn unsize_ty<'tcx>(cx: &ctxt<'tcx>, } _ => cx.sess.span_bug(span, &format!("UnsizeStruct with bad sty: {:?}", - ty_to_string(cx, ty))[]) + ty_to_string(cx, ty))) }, &UnsizeVtable(TyTrait { ref principal, ref bounds }, _) => { mk_trait(cx, principal.clone(), bounds.clone()) @@ -4547,7 +4532,7 @@ pub fn resolve_expr(tcx: &ctxt, expr: &ast::Expr) -> def::Def { Some(&def) => def, None => { tcx.sess.span_bug(expr.span, &format!( - "no def-map entry for expr {}", expr.id)[]); + "no def-map entry for expr {}", expr.id)); } } } @@ -4639,7 +4624,7 @@ pub fn expr_kind(tcx: &ctxt, expr: &ast::Expr) -> ExprKind { expr.span, &format!("uncategorized def for expr {}: {:?}", expr.id, - def)[]); + def)); } } } @@ -4767,7 +4752,7 @@ pub fn field_idx_strict(tcx: &ctxt, name: ast::Name, fields: &[field]) token::get_name(name), fields.iter() .map(|f| token::get_name(f.name).to_string()) - .collect::>())[]); + .collect::>())); } pub fn impl_or_trait_item_idx(id: ast::Name, trait_items: &[ImplOrTraitItem]) @@ -5019,14 +5004,14 @@ pub fn provided_trait_methods<'tcx>(cx: &ctxt<'tcx>, id: ast::DefId) _ => { cx.sess.bug(&format!("provided_trait_methods: `{:?}` is \ not a trait", - id)[]) + id)) } } } _ => { cx.sess.bug(&format!("provided_trait_methods: `{:?}` is not a \ trait", - id)[]) + id)) } } } else { @@ -5262,7 +5247,7 @@ impl<'tcx> VariantInfo<'tcx> { }; }, ast::StructVariantKind(ref struct_def) => { - let fields: &[StructField] = &struct_def.fields[]; + let fields: &[StructField] = &struct_def.fields; assert!(fields.len() > 0); @@ -5624,7 +5609,7 @@ pub fn get_attrs<'tcx>(tcx: &'tcx ctxt, did: DefId) -> CowVec<'tcx, ast::Attribute> { if is_local(did) { let item = tcx.map.expect_item(did.node); - Cow::Borrowed(&item.attrs[]) + 
Cow::Borrowed(&item.attrs) } else { Cow::Owned(csearch::get_item_attrs(&tcx.sess.cstore, did)) } @@ -5686,7 +5671,7 @@ pub fn lookup_struct_fields(cx: &ctxt, did: ast::DefId) -> Vec { _ => { cx.sess.bug( &format!("ID not mapped to struct fields: {}", - cx.map.node_to_string(did.node))[]); + cx.map.node_to_string(did.node))); } } } else { @@ -5719,7 +5704,7 @@ pub fn struct_fields<'tcx>(cx: &ctxt<'tcx>, did: ast::DefId, substs: &Substs<'tc pub fn tup_fields<'tcx>(v: &[Ty<'tcx>]) -> Vec> { v.iter().enumerate().map(|(i, &f)| { field { - name: token::intern(&i.to_string()[]), + name: token::intern(&i.to_string()), mt: mt { ty: f, mutbl: MutImmutable diff --git a/src/librustc/session/config.rs b/src/librustc/session/config.rs index 93a25de0491fe..efc12d00b10c6 100644 --- a/src/librustc/session/config.rs +++ b/src/librustc/session/config.rs @@ -311,19 +311,19 @@ macro_rules! options { match (value, opt_type_desc) { (Some(..), None) => { early_error(&format!("{} option `{}` takes no \ - value", $outputname, key)[]) + value", $outputname, key)) } (None, Some(type_desc)) => { early_error(&format!("{0} option `{1}` requires \ {2} ({3} {1}=)", $outputname, key, - type_desc, $prefix)[]) + type_desc, $prefix)) } (Some(value), Some(type_desc)) => { early_error(&format!("incorrect value `{}` for {} \ option `{}` - {} was expected", value, $outputname, - key, type_desc)[]) + key, type_desc)) } (None, None) => unreachable!() } @@ -333,7 +333,7 @@ macro_rules! options { } if !found { early_error(&format!("unknown {} option: `{}`", - $outputname, key)[]); + $outputname, key)); } } return op; @@ -590,10 +590,10 @@ pub fn default_lib_output() -> CrateType { pub fn default_configuration(sess: &Session) -> ast::CrateConfig { use syntax::parse::token::intern_and_get_ident as intern; - let end = &sess.target.target.target_endian[]; - let arch = &sess.target.target.arch[]; - let wordsz = &sess.target.target.target_pointer_width[]; - let os = &sess.target.target.target_os[]; + let end = &sess.target.target.target_endian; + let arch = &sess.target.target.arch; + let wordsz = &sess.target.target.target_pointer_width; + let os = &sess.target.target.target_os; let fam = match sess.target.target.options.is_like_windows { true => InternedString::new("windows"), @@ -634,18 +634,18 @@ pub fn build_configuration(sess: &Session) -> ast::CrateConfig { } pub fn build_target_config(opts: &Options, sp: &SpanHandler) -> Config { - let target = match Target::search(&opts.target_triple[]) { + let target = match Target::search(&opts.target_triple) { Ok(t) => t, Err(e) => { sp.handler().fatal(&format!("Error loading target specification: {}", e)); } }; - let (int_type, uint_type) = match &target.target_pointer_width[] { + let (int_type, uint_type) = match &target.target_pointer_width[..] 
{ "32" => (ast::TyI32, ast::TyU32), "64" => (ast::TyI64, ast::TyU64), w => sp.handler().fatal(&format!("target specification was invalid: unrecognized \ - target-pointer-width {}", w)[]) + target-pointer-width {}", w)) }; Config { @@ -863,7 +863,7 @@ pub fn build_session_options(matches: &getopts::Matches) -> Options { "dep-info" => OutputTypeDepInfo, _ => { early_error(&format!("unknown emission type: `{}`", - part)[]) + part)) } }; output_types.push(output_type) @@ -955,7 +955,7 @@ pub fn build_session_options(matches: &getopts::Matches) -> Options { (_, s) => { early_error(&format!("unknown library kind `{}`, expected \ one of dylib, framework, or static", - s)[]); + s)); } }; (name.to_string(), kind) @@ -991,7 +991,7 @@ pub fn build_session_options(matches: &getopts::Matches) -> Options { Some(arg) => { early_error(&format!("argument for --color must be auto, always \ or never (instead was `{}`)", - arg)[]) + arg)) } }; @@ -1111,7 +1111,7 @@ mod test { #[test] fn test_switch_implies_cfg_test() { let matches = - &match getopts(&["--test".to_string()], &optgroups()[]) { + &match getopts(&["--test".to_string()], &optgroups()) { Ok(m) => m, Err(f) => panic!("test_switch_implies_cfg_test: {}", f) }; @@ -1128,7 +1128,7 @@ mod test { fn test_switch_implies_cfg_test_unless_cfg_test() { let matches = &match getopts(&["--test".to_string(), "--cfg=test".to_string()], - &optgroups()[]) { + &optgroups()) { Ok(m) => m, Err(f) => { panic!("test_switch_implies_cfg_test_unless_cfg_test: {}", f) @@ -1148,7 +1148,7 @@ mod test { { let matches = getopts(&[ "-Awarnings".to_string() - ], &optgroups()[]).unwrap(); + ], &optgroups()).unwrap(); let registry = diagnostics::registry::Registry::new(&[]); let sessopts = build_session_options(&matches); let sess = build_session(sessopts, None, registry); @@ -1159,7 +1159,7 @@ mod test { let matches = getopts(&[ "-Awarnings".to_string(), "-Dwarnings".to_string() - ], &optgroups()[]).unwrap(); + ], &optgroups()).unwrap(); let registry = diagnostics::registry::Registry::new(&[]); let sessopts = build_session_options(&matches); let sess = build_session(sessopts, None, registry); @@ -1169,7 +1169,7 @@ mod test { { let matches = getopts(&[ "-Adead_code".to_string() - ], &optgroups()[]).unwrap(); + ], &optgroups()).unwrap(); let registry = diagnostics::registry::Registry::new(&[]); let sessopts = build_session_options(&matches); let sess = build_session(sessopts, None, registry); diff --git a/src/librustc/session/mod.rs b/src/librustc/session/mod.rs index c1c5518887577..932a96e9b9ebd 100644 --- a/src/librustc/session/mod.rs +++ b/src/librustc/session/mod.rs @@ -186,7 +186,7 @@ impl Session { // cases later on pub fn impossible_case(&self, sp: Span, msg: &str) -> ! 
{ self.span_bug(sp, - &format!("impossible case reached: {}", msg)[]); + &format!("impossible case reached: {}", msg)); } pub fn verbose(&self) -> bool { self.opts.debugging_opts.verbose } pub fn time_passes(&self) -> bool { self.opts.debugging_opts.time_passes } @@ -228,7 +228,7 @@ impl Session { } pub fn target_filesearch(&self, kind: PathKind) -> filesearch::FileSearch { filesearch::FileSearch::new(self.sysroot(), - &self.opts.target_triple[], + &self.opts.target_triple, &self.opts.search_paths, kind) } diff --git a/src/librustc/util/common.rs b/src/librustc/util/common.rs index c9d50b9cecf84..a3cc23b7bba83 100644 --- a/src/librustc/util/common.rs +++ b/src/librustc/util/common.rs @@ -14,7 +14,6 @@ use std::cell::{RefCell, Cell}; use std::collections::HashMap; use std::fmt::Debug; use std::hash::Hash; -#[cfg(stage0)] use std::hash::Hasher; use std::iter::repeat; use std::time::Duration; use std::collections::hash_state::HashState; @@ -139,57 +138,13 @@ pub fn block_query
(b: &ast::Block, p: P) -> bool where P: FnMut(&ast::Expr) - /// K: Eq + Hash, V, S, H: Hasher /// -/// Determines whether there exists a path from `source` to `destination`. The graph is defined by -/// the `edges_map`, which maps from a node `S` to a list of its adjacent nodes `T`. +/// Determines whether there exists a path from `source` to `destination`. The +/// graph is defined by the `edges_map`, which maps from a node `S` to a list of +/// its adjacent nodes `T`. /// -/// Efficiency note: This is implemented in an inefficient way because it is typically invoked on -/// very small graphs. If the graphs become larger, a more efficient graph representation and -/// algorithm would probably be advised. -#[cfg(stage0)] -pub fn can_reach(edges_map: &HashMap, S>, source: T, - destination: T) -> bool - where S: HashState, - ::Hasher: Hasher, - T: Hash<::Hasher> + Eq + Clone, -{ - if source == destination { - return true; - } - - // Do a little breadth-first-search here. The `queue` list - // doubles as a way to detect if we've seen a particular FR - // before. Note that we expect this graph to be an *extremely - // shallow* tree. - let mut queue = vec!(source); - let mut i = 0; - while i < queue.len() { - match edges_map.get(&queue[i]) { - Some(edges) => { - for target in edges { - if *target == destination { - return true; - } - - if !queue.iter().any(|x| x == target) { - queue.push((*target).clone()); - } - } - } - None => {} - } - i += 1; - } - return false; -} -/// K: Eq + Hash, V, S, H: Hasher -/// -/// Determines whether there exists a path from `source` to `destination`. The graph is defined by -/// the `edges_map`, which maps from a node `S` to a list of its adjacent nodes `T`. -/// -/// Efficiency note: This is implemented in an inefficient way because it is typically invoked on -/// very small graphs. If the graphs become larger, a more efficient graph representation and -/// algorithm would probably be advised. -#[cfg(not(stage0))] +/// Efficiency note: This is implemented in an inefficient way because it is +/// typically invoked on very small graphs. If the graphs become larger, a more +/// efficient graph representation and algorithm would probably be advised. pub fn can_reach(edges_map: &HashMap, S>, source: T, destination: T) -> bool where S: HashState, T: Hash + Eq + Clone, @@ -250,52 +205,6 @@ pub fn can_reach(edges_map: &HashMap, S>, source: T, /// } /// ``` #[inline(always)] -#[cfg(stage0)] -pub fn memoized(cache: &RefCell>, arg: T, f: F) -> U - where T: Clone + Hash<::Hasher> + Eq, - U: Clone, - S: HashState, - ::Hasher: Hasher, - F: FnOnce(T) -> U, -{ - let key = arg.clone(); - let result = cache.borrow().get(&key).cloned(); - match result { - Some(result) => result, - None => { - let result = f(arg); - cache.borrow_mut().insert(key, result.clone()); - result - } - } -} -/// Memoizes a one-argument closure using the given RefCell containing -/// a type implementing MutableMap to serve as a cache. -/// -/// In the future the signature of this function is expected to be: -/// ``` -/// pub fn memoized>( -/// cache: &RefCell, -/// f: &|T| -> U -/// ) -> impl |T| -> U { -/// ``` -/// but currently it is not possible. 
-/// -/// # Example -/// ``` -/// struct Context { -/// cache: RefCell> -/// } -/// -/// fn factorial(ctxt: &Context, n: uint) -> uint { -/// memoized(&ctxt.cache, n, |n| match n { -/// 0 | 1 => n, -/// _ => factorial(ctxt, n - 2) + factorial(ctxt, n - 1) -/// }) -/// } -/// ``` -#[inline(always)] -#[cfg(not(stage0))] pub fn memoized(cache: &RefCell>, arg: T, f: F) -> U where T: Clone + Hash + Eq, U: Clone, diff --git a/src/librustc/util/nodemap.rs b/src/librustc/util/nodemap.rs index 1b07ce789e77c..b15da7dab3ee6 100644 --- a/src/librustc/util/nodemap.rs +++ b/src/librustc/util/nodemap.rs @@ -16,7 +16,6 @@ use std::collections::hash_state::{DefaultState}; use std::collections::{HashMap, HashSet}; use std::default::Default; use std::hash::{Hasher, Hash}; -#[cfg(stage0)] use std::hash::Writer; use syntax::ast; pub type FnvHashMap = HashMap>; @@ -28,19 +27,9 @@ pub type DefIdMap = FnvHashMap; pub type NodeSet = FnvHashSet; pub type DefIdSet = FnvHashSet; -#[cfg(stage0)] -pub fn FnvHashMap + Eq, V>() -> FnvHashMap { - Default::default() -} -#[cfg(stage0)] -pub fn FnvHashSet + Eq>() -> FnvHashSet { - Default::default() -} -#[cfg(not(stage0))] pub fn FnvHashMap() -> FnvHashMap { Default::default() } -#[cfg(not(stage0))] pub fn FnvHashSet() -> FnvHashSet { Default::default() } @@ -63,26 +52,6 @@ impl Default for FnvHasher { fn default() -> FnvHasher { FnvHasher(0xcbf29ce484222325) } } -#[cfg(stage0)] -impl Hasher for FnvHasher { - type Output = u64; - fn reset(&mut self) { *self = Default::default(); } - fn finish(&self) -> u64 { self.0 } -} - -#[cfg(stage0)] -impl Writer for FnvHasher { - fn write(&mut self, bytes: &[u8]) { - let FnvHasher(mut hash) = *self; - for byte in bytes { - hash = hash ^ (*byte as u64); - hash = hash * 0x100000001b3; - } - *self = FnvHasher(hash); - } -} - -#[cfg(not(stage0))] impl Hasher for FnvHasher { fn write(&mut self, bytes: &[u8]) { let FnvHasher(mut hash) = *self; diff --git a/src/librustc/util/ppaux.rs b/src/librustc/util/ppaux.rs index 1d46c011bb32e..23b63bc26657c 100644 --- a/src/librustc/util/ppaux.rs +++ b/src/librustc/util/ppaux.rs @@ -29,7 +29,6 @@ use middle::ty_fold::TypeFoldable; use std::collections::HashMap; use std::collections::hash_state::HashState; use std::hash::Hash; -#[cfg(stage0)] use std::hash::Hasher; use std::rc::Rc; use syntax::abi; use syntax::ast_map; @@ -58,12 +57,12 @@ pub fn note_and_explain_region(cx: &ctxt, (ref str, Some(span)) => { cx.sess.span_note( span, - &format!("{}{}{}", prefix, *str, suffix)[]); + &format!("{}{}{}", prefix, *str, suffix)); Some(span) } (ref str, None) => { cx.sess.note( - &format!("{}{}{}", prefix, *str, suffix)[]); + &format!("{}{}{}", prefix, *str, suffix)); None } } @@ -274,7 +273,7 @@ pub fn ty_to_string<'tcx>(cx: &ctxt<'tcx>, typ: &ty::TyS<'tcx>) -> String { }; if abi != abi::Rust { - s.push_str(&format!("extern {} ", abi.to_string())[]); + s.push_str(&format!("extern {} ", abi.to_string())); }; s.push_str("fn"); @@ -330,7 +329,7 @@ pub fn ty_to_string<'tcx>(cx: &ctxt<'tcx>, typ: &ty::TyS<'tcx>) -> String { ty::FnConverging(t) => { if !ty::type_is_nil(t) { s.push_str(" -> "); - s.push_str(&ty_to_string(cx, t)[]); + s.push_str(&ty_to_string(cx, t)); } } ty::FnDiverging => { @@ -367,7 +366,7 @@ pub fn ty_to_string<'tcx>(cx: &ctxt<'tcx>, typ: &ty::TyS<'tcx>) -> String { } ty_rptr(r, ref tm) => { let mut buf = region_ptr_to_string(cx, *r); - buf.push_str(&mt_to_string(cx, tm)[]); + buf.push_str(&mt_to_string(cx, tm)); buf } ty_open(typ) => @@ -561,7 +560,7 @@ pub fn parameterized<'tcx,GG>(cx: &ctxt<'tcx>, 
} else if strs[0].starts_with("(") && strs[0].ends_with(")") { &strs[0][1 .. strs[0].len() - 1] // Remove '(' and ')' } else { - &strs[0][] + &strs[0][..] }, tail) } else if strs.len() > 0 { @@ -1434,23 +1433,6 @@ impl<'tcx, T:Repr<'tcx>> Repr<'tcx> for ty::Binder { } } -#[cfg(stage0)] -impl<'tcx, S, K, V> Repr<'tcx> for HashMap - where K: Hash<::Hasher> + Eq + Repr<'tcx>, - V: Repr<'tcx>, - S: HashState, - ::Hasher: Hasher, -{ - fn repr(&self, tcx: &ctxt<'tcx>) -> String { - format!("HashMap({})", - self.iter() - .map(|(k,v)| format!("{} => {}", k.repr(tcx), v.repr(tcx))) - .collect::>() - .connect(", ")) - } -} - -#[cfg(not(stage0))] impl<'tcx, S, K, V> Repr<'tcx> for HashMap where K: Hash + Eq + Repr<'tcx>, V: Repr<'tcx>, diff --git a/src/librustc_back/archive.rs b/src/librustc_back/archive.rs index c45ee258342ec..6bf745315eace 100644 --- a/src/librustc_back/archive.rs +++ b/src/librustc_back/archive.rs @@ -73,19 +73,19 @@ fn run_ar(handler: &ErrorHandler, maybe_ar_prog: &Option, Ok(prog) => { let o = prog.wait_with_output().unwrap(); if !o.status.success() { - handler.err(&format!("{:?} failed with: {}", cmd, o.status)[]); + handler.err(&format!("{:?} failed with: {}", cmd, o.status)); handler.note(&format!("stdout ---\n{}", - str::from_utf8(&o.output[]).unwrap())[]); + str::from_utf8(&o.output).unwrap())); handler.note(&format!("stderr ---\n{}", - str::from_utf8(&o.error[]).unwrap()) - []); + str::from_utf8(&o.error).unwrap()) + ); handler.abort_if_errors(); } o }, Err(e) => { handler.err(&format!("could not exec `{}`: {}", &ar[..], - e)[]); + e)); handler.abort_if_errors(); panic!("rustc::back::archive::run_ar() should not reach this point"); } @@ -110,7 +110,7 @@ pub fn find_library(name: &str, osprefix: &str, ossuffix: &str, } handler.fatal(&format!("could not find native static library `{}`, \ perhaps an -L flag is missing?", - name)[]); + name)); } impl<'a> Archive<'a> { @@ -142,7 +142,7 @@ impl<'a> Archive<'a> { /// Lists all files in an archive pub fn files(&self) -> Vec { let output = run_ar(self.handler, &self.maybe_ar_prog, "t", None, &[&self.dst]); - let output = str::from_utf8(&output.output[]).unwrap(); + let output = str::from_utf8(&output.output).unwrap(); // use lines_any because windows delimits output with `\r\n` instead of // just `\n` output.lines_any().map(|s| s.to_string()).collect() @@ -174,9 +174,9 @@ impl<'a> ArchiveBuilder<'a> { /// search in the relevant locations for a library named `name`. 
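The trailing `[]` being deleted in the hunks above, and in most of the hunks that follow, was the old full-range indexing sugar for borrowing a `String` or `Vec<T>` as a slice. Where the target is a `&str` or `&[T]` parameter, a plain `&expr` now suffices through deref coercion, and the explicit `&expr[..]` form is kept only where no coercion site exists (for example, when comparing against a string literal). A minimal standalone sketch of the two forms (not rustc code):

```rust
fn takes_str(s: &str) -> usize { s.len() }

fn main() {
    let owned: String = format!("{}-{}", "prefix", 42);

    // Deref coercion: a `&String` argument coerces to `&str` at the call site,
    // so no explicit slicing is needed.
    let n = takes_str(&owned);

    // An explicit full-range slice is still useful where no coercion applies,
    // e.g. comparing the owned string against a literal.
    assert!(n == owned.len() && &owned[..] == "prefix-42");
}
```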
pub fn add_native_library(&mut self, name: &str) -> old_io::IoResult<()> { let location = find_library(name, - &self.archive.slib_prefix[], - &self.archive.slib_suffix[], - &self.archive.lib_search_paths[], + &self.archive.slib_prefix, + &self.archive.slib_suffix, + &self.archive.lib_search_paths, self.archive.handler); self.add_archive(&location, name, |_| false) } diff --git a/src/librustc_back/rpath.rs b/src/librustc_back/rpath.rs index 80eb39b1ec8a2..3f2dcee511095 100644 --- a/src/librustc_back/rpath.rs +++ b/src/librustc_back/rpath.rs @@ -49,7 +49,7 @@ pub fn get_rpath_flags(config: RPathConfig) -> Vec where fn rpaths_to_flags(rpaths: &[String]) -> Vec { let mut ret = Vec::new(); for rpath in rpaths { - ret.push(format!("-Wl,-rpath,{}", &(*rpath)[])); + ret.push(format!("-Wl,-rpath,{}", &(*rpath))); } return ret; } diff --git a/src/librustc_back/svh.rs b/src/librustc_back/svh.rs index c33d10bfbab39..2fc43ab26b58e 100644 --- a/src/librustc_back/svh.rs +++ b/src/librustc_back/svh.rs @@ -64,7 +64,7 @@ impl Svh { } pub fn as_str<'a>(&'a self) -> &'a str { - &self.hash[] + &self.hash } pub fn calculate(metadata: &Vec, krate: &ast::Crate) -> Svh { @@ -329,7 +329,7 @@ mod svh_visitor { // macro invocations, namely macro_rules definitions, // *can* appear as items, even in the expanded crate AST. - if ¯o_name(mac)[] == "macro_rules" { + if ¯o_name(mac)[..] == "macro_rules" { // Pretty-printing definition to a string strips out // surface artifacts (currently), such as the span // information, yielding a content-based hash. @@ -356,7 +356,7 @@ mod svh_visitor { fn macro_name(mac: &Mac) -> token::InternedString { match &mac.node { &MacInvocTT(ref path, ref _tts, ref _stx_ctxt) => { - let s = &path.segments[]; + let s = &path.segments; assert_eq!(s.len(), 1); content(s[0].identifier) } diff --git a/src/librustc_back/target/mod.rs b/src/librustc_back/target/mod.rs index 70ae613763858..b04c07977c369 100644 --- a/src/librustc_back/target/mod.rs +++ b/src/librustc_back/target/mod.rs @@ -239,7 +239,7 @@ impl Target { .and_then(|os| os.map(|s| s.to_string())) { Some(val) => val, None => - handler.fatal(&format!("Field {} in target specification is required", name)[]) + handler.fatal(&format!("Field {} in target specification is required", name)) } }; diff --git a/src/librustc_borrowck/borrowck/check_loans.rs b/src/librustc_borrowck/borrowck/check_loans.rs index abe01d193b492..23ca5b636815b 100644 --- a/src/librustc_borrowck/borrowck/check_loans.rs +++ b/src/librustc_borrowck/borrowck/check_loans.rs @@ -466,7 +466,7 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { new_loan.span, &format!("cannot borrow `{}`{} as mutable \ more than once at a time", - nl, new_loan_msg)[]) + nl, new_loan_msg)) } (ty::UniqueImmBorrow, _) => { @@ -474,7 +474,7 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { new_loan.span, &format!("closure requires unique access to `{}` \ but {} is already borrowed{}", - nl, ol_pronoun, old_loan_msg)[]); + nl, ol_pronoun, old_loan_msg)); } (_, ty::UniqueImmBorrow) => { @@ -482,7 +482,7 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { new_loan.span, &format!("cannot borrow `{}`{} as {} because \ previous closure requires unique access", - nl, new_loan_msg, new_loan.kind.to_user_str())[]); + nl, new_loan_msg, new_loan.kind.to_user_str())); } (_, _) => { @@ -495,7 +495,7 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { new_loan.kind.to_user_str(), ol_pronoun, old_loan.kind.to_user_str(), - old_loan_msg)[]); + old_loan_msg)); } } @@ -504,7 +504,7 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { self.bccx.span_note( 
span, &format!("borrow occurs due to use of `{}` in closure", - nl)[]); + nl)); } _ => { } } @@ -553,7 +553,7 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { self.bccx.span_note( old_loan.span, - &format!("{}; {}", borrow_summary, rule_summary)[]); + &format!("{}; {}", borrow_summary, rule_summary)); let old_loan_span = self.tcx().map.span(old_loan.kill_scope.node_id()); self.bccx.span_end_note(old_loan_span, @@ -623,13 +623,13 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { self.bccx.span_err( span, &format!("cannot use `{}` because it was mutably borrowed", - &self.bccx.loan_path_to_string(copy_path)[]) - []); + &self.bccx.loan_path_to_string(copy_path)) + ); self.bccx.span_note( loan_span, &format!("borrow of `{}` occurs here", - &self.bccx.loan_path_to_string(&*loan_path)[]) - []); + &self.bccx.loan_path_to_string(&*loan_path)) + ); } } } @@ -648,20 +648,20 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { let err_message = match move_kind { move_data::Captured => format!("cannot move `{}` into closure because it is borrowed", - &self.bccx.loan_path_to_string(move_path)[]), + &self.bccx.loan_path_to_string(move_path)), move_data::Declared | move_data::MoveExpr | move_data::MovePat => format!("cannot move out of `{}` because it is borrowed", - &self.bccx.loan_path_to_string(move_path)[]) + &self.bccx.loan_path_to_string(move_path)) }; self.bccx.span_err(span, &err_message[..]); self.bccx.span_note( loan_span, &format!("borrow of `{}` occurs here", - &self.bccx.loan_path_to_string(&*loan_path)[]) - []); + &self.bccx.loan_path_to_string(&*loan_path)) + ); } } } @@ -842,7 +842,7 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { self.bccx.span_err( assignment_span, &format!("cannot assign to {}", - self.bccx.cmt_to_string(&*assignee_cmt))[]); + self.bccx.cmt_to_string(&*assignee_cmt))); self.bccx.span_help( self.tcx().map.span(upvar_id.closure_expr_id), "consider changing this closure to take self by mutable reference"); @@ -851,7 +851,7 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { assignment_span, &format!("cannot assign to {} {}", assignee_cmt.mutbl.to_user_str(), - self.bccx.cmt_to_string(&*assignee_cmt))[]); + self.bccx.cmt_to_string(&*assignee_cmt))); } } _ => match opt_loan_path(&assignee_cmt) { @@ -861,14 +861,14 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { &format!("cannot assign to {} {} `{}`", assignee_cmt.mutbl.to_user_str(), self.bccx.cmt_to_string(&*assignee_cmt), - self.bccx.loan_path_to_string(&*lp))[]); + self.bccx.loan_path_to_string(&*lp))); } None => { self.bccx.span_err( assignment_span, &format!("cannot assign to {} {}", assignee_cmt.mutbl.to_user_str(), - self.bccx.cmt_to_string(&*assignee_cmt))[]); + self.bccx.cmt_to_string(&*assignee_cmt))); } } } @@ -988,10 +988,10 @@ impl<'a, 'tcx> CheckLoanCtxt<'a, 'tcx> { self.bccx.span_err( span, &format!("cannot assign to `{}` because it is borrowed", - self.bccx.loan_path_to_string(loan_path))[]); + self.bccx.loan_path_to_string(loan_path))); self.bccx.span_note( loan.span, &format!("borrow of `{}` occurs here", - self.bccx.loan_path_to_string(loan_path))[]); + self.bccx.loan_path_to_string(loan_path))); } } diff --git a/src/librustc_borrowck/borrowck/fragments.rs b/src/librustc_borrowck/borrowck/fragments.rs index c873831cb0f65..41ccee4f8fbf0 100644 --- a/src/librustc_borrowck/borrowck/fragments.rs +++ b/src/librustc_borrowck/borrowck/fragments.rs @@ -123,12 +123,12 @@ pub fn instrument_move_fragments<'tcx>(this: &MoveData<'tcx>, let attrs : &[ast::Attribute]; attrs = match tcx.map.find(id) { Some(ast_map::NodeItem(ref item)) => - &item.attrs[], + 
&item.attrs, Some(ast_map::NodeImplItem(&ast::MethodImplItem(ref m))) => - &m.attrs[], + &m.attrs, Some(ast_map::NodeTraitItem(&ast::ProvidedMethod(ref m))) => - &m.attrs[], - _ => &[][], + &m.attrs, + _ => &[], }; let span_err = @@ -144,7 +144,7 @@ pub fn instrument_move_fragments<'tcx>(this: &MoveData<'tcx>, for (i, mpi) in vec_rc.iter().enumerate() { let render = || this.path_loan_path(*mpi).user_string(tcx); if span_err { - tcx.sess.span_err(sp, &format!("{}: `{}`", kind, render())[]); + tcx.sess.span_err(sp, &format!("{}: `{}`", kind, render())); } if print { println!("id:{} {}[{}] `{}`", id, kind, i, render()); @@ -156,7 +156,7 @@ pub fn instrument_move_fragments<'tcx>(this: &MoveData<'tcx>, for (i, f) in vec_rc.iter().enumerate() { let render = || f.loan_path_user_string(this, tcx); if span_err { - tcx.sess.span_err(sp, &format!("{}: `{}`", kind, render())[]); + tcx.sess.span_err(sp, &format!("{}: `{}`", kind, render())); } if print { println!("id:{} {}[{}] `{}`", id, kind, i, render()); diff --git a/src/librustc_borrowck/borrowck/gather_loans/mod.rs b/src/librustc_borrowck/borrowck/gather_loans/mod.rs index 4e308c5809f45..333aef81390bf 100644 --- a/src/librustc_borrowck/borrowck/gather_loans/mod.rs +++ b/src/librustc_borrowck/borrowck/gather_loans/mod.rs @@ -307,7 +307,7 @@ impl<'a, 'tcx> GatherLoanCtxt<'a, 'tcx> { self.tcx().sess.span_bug( cmt.span, &format!("invalid borrow lifetime: {:?}", - loan_region)[]); + loan_region)); } }; debug!("loan_scope = {:?}", loan_scope); diff --git a/src/librustc_borrowck/borrowck/gather_loans/move_error.rs b/src/librustc_borrowck/borrowck/gather_loans/move_error.rs index da5c847a04607..53761eb14713d 100644 --- a/src/librustc_borrowck/borrowck/gather_loans/move_error.rs +++ b/src/librustc_borrowck/borrowck/gather_loans/move_error.rs @@ -121,7 +121,7 @@ fn report_cannot_move_out_of<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>, mc::cat_static_item => { bccx.span_err(move_from.span, &format!("cannot move out of {}", - move_from.descriptive_string(bccx.tcx))[]); + move_from.descriptive_string(bccx.tcx))); } mc::cat_interior(ref b, mc::InteriorElement(Kind::Index, _)) => { @@ -130,7 +130,7 @@ fn report_cannot_move_out_of<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>, bccx.span_err(move_from.span, &format!("cannot move out of type `{}`, \ a non-copy fixed-size array", - b.ty.user_string(bccx.tcx))[]); + b.ty.user_string(bccx.tcx))); } } @@ -143,7 +143,7 @@ fn report_cannot_move_out_of<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>, move_from.span, &format!("cannot move out of type `{}`, \ which defines the `Drop` trait", - b.ty.user_string(bccx.tcx))[]); + b.ty.user_string(bccx.tcx))); }, _ => { bccx.span_bug(move_from.span, "this path should not cause illegal move") @@ -170,10 +170,10 @@ fn note_move_destination(bccx: &BorrowckCtxt, &format!("to prevent the move, \ use `ref {0}` or `ref mut {0}` to capture value by \ reference", - pat_name)[]); + pat_name)); } else { bccx.span_note(move_to_span, &format!("and here (use `ref {0}` or `ref mut {0}`)", - pat_name)[]); + pat_name)); } } diff --git a/src/librustc_borrowck/borrowck/mod.rs b/src/librustc_borrowck/borrowck/mod.rs index 518e4bc472ca4..dfd98881ace86 100644 --- a/src/librustc_borrowck/borrowck/mod.rs +++ b/src/librustc_borrowck/borrowck/mod.rs @@ -524,7 +524,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { pub fn report(&self, err: BckError<'tcx>) { self.span_err( err.span, - &self.bckerr_to_string(&err)[]); + &self.bckerr_to_string(&err)); self.note_and_explain_bckerr(err); } @@ -546,7 +546,7 @@ impl<'a, 'tcx> 
BorrowckCtxt<'a, 'tcx> { use_span, &format!("{} of possibly uninitialized variable: `{}`", verb, - self.loan_path_to_string(lp))[]); + self.loan_path_to_string(lp))); (self.loan_path_to_string(moved_lp), String::new()) } @@ -588,7 +588,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { &format!("{} of {}moved value: `{}`", verb, msg, - nl)[]); + nl)); (ol, moved_lp_msg) } }; @@ -607,7 +607,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { self.tcx.sess.bug(&format!("MoveExpr({}) maps to \ {:?}, not Expr", the_move.id, - r)[]) + r)) } }; let (suggestion, _) = @@ -618,7 +618,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { ol, moved_lp_msg, expr_ty.user_string(self.tcx), - suggestion)[]); + suggestion)); } move_data::MovePat => { @@ -629,7 +629,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { which is moved by default", ol, moved_lp_msg, - pat_ty.user_string(self.tcx))[]); + pat_ty.user_string(self.tcx))); self.tcx.sess.span_help(span, "use `ref` to override"); } @@ -645,7 +645,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { self.tcx.sess.bug(&format!("Captured({}) maps to \ {:?}, not Expr", the_move.id, - r)[]) + r)) } }; let (suggestion, help) = @@ -661,7 +661,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { ol, moved_lp_msg, expr_ty.user_string(self.tcx), - suggestion)[]); + suggestion)); self.tcx.sess.span_help(expr_span, help); } } @@ -704,7 +704,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { self.tcx.sess.span_err( span, &format!("re-assignment of immutable variable `{}`", - self.loan_path_to_string(lp))[]); + self.loan_path_to_string(lp))); self.tcx.sess.span_note(assign.span, "prior assignment occurs here"); } @@ -825,7 +825,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { self.tcx.sess.span_err( span, &format!("{} in an aliasable location", - prefix)[]); + prefix)); } mc::AliasableClosure(id) => { self.tcx.sess.span_err(span, @@ -847,12 +847,12 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { mc::AliasableStaticMut(..) 
=> { self.tcx.sess.span_err( span, - &format!("{} in a static location", prefix)[]); + &format!("{} in a static location", prefix)); } mc::AliasableBorrowed => { self.tcx.sess.span_err( span, - &format!("{} in a `&` reference", prefix)[]); + &format!("{} in a `&` reference", prefix)); } } @@ -920,12 +920,12 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { note_and_explain_region( self.tcx, &format!("{} would have to be valid for ", - descr)[], + descr), loan_scope, "..."); note_and_explain_region( self.tcx, - &format!("...but {} is only valid for ", descr)[], + &format!("...but {} is only valid for ", descr), ptr_scope, ""); } @@ -945,7 +945,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { out.push('('); self.append_loan_path_to_string(&**lp_base, out); out.push_str(DOWNCAST_PRINTED_OPERATOR); - out.push_str(&ty::item_path_str(self.tcx, variant_def_id)[]); + out.push_str(&ty::item_path_str(self.tcx, variant_def_id)); out.push(')'); } @@ -959,7 +959,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { } mc::PositionalField(idx) => { out.push('.'); - out.push_str(&idx.to_string()[]); + out.push_str(&idx.to_string()); } } } @@ -991,7 +991,7 @@ impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> { out.push('('); self.append_autoderefd_loan_path_to_string(&**lp_base, out); out.push(':'); - out.push_str(&ty::item_path_str(self.tcx, variant_def_id)[]); + out.push_str(&ty::item_path_str(self.tcx, variant_def_id)); out.push(')'); } diff --git a/src/librustc_borrowck/graphviz.rs b/src/librustc_borrowck/graphviz.rs index 39c9d9ba6ad24..4465000d8d81e 100644 --- a/src/librustc_borrowck/graphviz.rs +++ b/src/librustc_borrowck/graphviz.rs @@ -60,7 +60,7 @@ impl<'a, 'tcx> DataflowLabeller<'a, 'tcx> { if seen_one { sets.push_str(" "); } else { seen_one = true; } sets.push_str(variant.short_name()); sets.push_str(": "); - sets.push_str(&self.dataflow_for_variant(e, n, variant)[]); + sets.push_str(&self.dataflow_for_variant(e, n, variant)); } sets } diff --git a/src/librustc_driver/driver.rs b/src/librustc_driver/driver.rs index a260997f60594..b12f05d7c50f7 100644 --- a/src/librustc_driver/driver.rs +++ b/src/librustc_driver/driver.rs @@ -77,10 +77,10 @@ pub fn compile_input(sess: Session, let outputs = build_output_filenames(input, outdir, output, - &krate.attrs[], + &krate.attrs, &sess); let id = link::find_crate_name(Some(&sess), - &krate.attrs[], + &krate.attrs, input); let expanded_crate = match phase_2_configure_and_expand(&sess, @@ -112,6 +112,7 @@ pub fn compile_input(sess: Session, &sess, outdir, &ast_map, + &ast_map.krate(), &id[..])); let analysis = phase_3_run_analysis_passes(sess, @@ -287,11 +288,13 @@ impl<'a, 'ast, 'tcx> CompileState<'a, 'ast, 'tcx> { session: &'a Session, out_dir: &'a Option, ast_map: &'a ast_map::Map<'ast>, + expanded_crate: &'a ast::Crate, crate_name: &'a str) -> CompileState<'a, 'ast, 'tcx> { CompileState { crate_name: Some(crate_name), ast_map: Some(ast_map), + expanded_crate: Some(expanded_crate), .. CompileState::empty(input, session, out_dir) } } @@ -299,14 +302,14 @@ impl<'a, 'ast, 'tcx> CompileState<'a, 'ast, 'tcx> { fn state_after_analysis(input: &'a Input, session: &'a Session, out_dir: &'a Option, - krate: &'a ast::Crate, + expanded_crate: &'a ast::Crate, analysis: &'a ty::CrateAnalysis<'tcx>, tcx: &'a ty::ctxt<'tcx>) -> CompileState<'a, 'ast, 'tcx> { CompileState { analysis: Some(analysis), tcx: Some(tcx), - krate: Some(krate), + expanded_crate: Some(expanded_crate), .. 
CompileState::empty(input, session, out_dir) } } @@ -375,9 +378,9 @@ pub fn phase_2_configure_and_expand(sess: &Session, let time_passes = sess.time_passes(); *sess.crate_types.borrow_mut() = - collect_crate_types(sess, &krate.attrs[]); + collect_crate_types(sess, &krate.attrs); *sess.crate_metadata.borrow_mut() = - collect_crate_metadata(sess, &krate.attrs[]); + collect_crate_metadata(sess, &krate.attrs); time(time_passes, "recursion limit", (), |_| { middle::recursion_limit::update_recursion_limit(sess, &krate); @@ -721,7 +724,7 @@ pub fn phase_5_run_llvm_passes(sess: &Session, time(sess.time_passes(), "LLVM passes", (), |_| write::run_passes(sess, trans, - &sess.opts.output_types[], + &sess.opts.output_types, outputs)); } @@ -742,7 +745,7 @@ pub fn phase_6_link_output(sess: &Session, link::link_binary(sess, trans, outputs, - &trans.link.crate_name[])); + &trans.link.crate_name)); env::set_var("PATH", &old_path); } @@ -796,7 +799,7 @@ fn write_out_deps(sess: &Session, // write Makefile-compatible dependency rules let files: Vec = sess.codemap().files.borrow() .iter().filter(|fmap| fmap.is_real_file()) - .map(|fmap| escape_dep_filename(&fmap.name[])) + .map(|fmap| escape_dep_filename(&fmap.name)) .collect(); let mut file = try!(old_io::File::create(&deps_filename)); for path in &out_filenames { @@ -810,7 +813,7 @@ fn write_out_deps(sess: &Session, Ok(()) => {} Err(e) => { sess.fatal(&format!("error writing dependencies to `{}`: {}", - deps_filename.display(), e)[]); + deps_filename.display(), e)); } } } @@ -881,7 +884,7 @@ pub fn collect_crate_types(session: &Session, if !res { session.warn(&format!("dropping unsupported crate type `{}` \ for target `{}`", - *crate_type, session.opts.target_triple)[]); + *crate_type, session.opts.target_triple)); } res diff --git a/src/librustc_driver/lib.rs b/src/librustc_driver/lib.rs index 2550432c8101a..0fbfeb831850f 100644 --- a/src/librustc_driver/lib.rs +++ b/src/librustc_driver/lib.rs @@ -124,7 +124,7 @@ pub fn run_compiler<'a>(args: &[String], let sopts = config::build_session_options(&matches); let (odir, ofile) = make_output(&matches); - let (input, input_file_path) = match make_input(&matches.free[]) { + let (input, input_file_path) = match make_input(&matches.free) { Some((input, input_file_path)) => callbacks.some_input(input, input_file_path), None => match callbacks.no_input(&matches, &sopts, &odir, &ofile, &descriptions) { Some((input, input_file_path)) => (input, input_file_path), @@ -166,7 +166,7 @@ fn make_output(matches: &getopts::Matches) -> (Option, Option) { // Extract input (string or file and optional path) from matches. 
fn make_input(free_matches: &[String]) -> Option<(Input, Option)> { if free_matches.len() == 1 { - let ifile = &free_matches[0][]; + let ifile = &free_matches[0][..]; if ifile == "-" { let contents = old_io::stdin().read_to_end().unwrap(); let src = String::from_utf8(contents).unwrap(); @@ -277,7 +277,7 @@ impl<'a> CompilerCalls<'a> for RustcDefaultCalls { println!("{}", description); } None => { - early_error(&format!("no extended information for {}", code)[]); + early_error(&format!("no extended information for {}", code)); } } return Compilation::Stop; @@ -373,11 +373,13 @@ impl<'a> CompilerCalls<'a> for RustcDefaultCalls { if sess.opts.debugging_opts.save_analysis { control.after_analysis.callback = box |state| { - time(state.session.time_passes(), "save analysis", state.krate.unwrap(), |krate| - save::process_crate(state.session, - krate, - state.analysis.unwrap(), - state.out_dir)); + time(state.session.time_passes(), + "save analysis", + state.expanded_crate.unwrap(), + |krate| save::process_crate(state.session, + krate, + state.analysis.unwrap(), + state.out_dir)); }; control.make_glob_map = resolve::MakeGlobMap::Yes; } @@ -678,7 +680,7 @@ pub fn handle_options(mut args: Vec) -> Option { } let matches = - match getopts::getopts(&args[..], &config::optgroups()[]) { + match getopts::getopts(&args[..], &config::optgroups()) { Ok(m) => m, Err(f_stable_attempt) => { // redo option parsing, including unstable options this time, @@ -811,7 +813,7 @@ pub fn monitor(f: F) { Err(e) => { emitter.emit(None, &format!("failed to read internal \ - stderr: {}", e)[], + stderr: {}", e), None, diagnostic::Error) } diff --git a/src/librustc_driver/pretty.rs b/src/librustc_driver/pretty.rs index 0fbfa5fd89dd7..3f9fdd28e4405 100644 --- a/src/librustc_driver/pretty.rs +++ b/src/librustc_driver/pretty.rs @@ -312,7 +312,7 @@ impl<'tcx> pprust::PpAnn for TypedAnnotation<'tcx> { try!(pp::word(&mut s.s, &ppaux::ty_to_string( tcx, - ty::expr_ty(tcx, expr))[])); + ty::expr_ty(tcx, expr)))); s.pclose() } _ => Ok(()) @@ -602,7 +602,7 @@ pub fn pretty_print_input(sess: Session, debug!("pretty printing flow graph for {:?}", opt_uii); let uii = opt_uii.unwrap_or_else(|| { sess.fatal(&format!("`pretty flowgraph=..` needs NodeId (int) or - unique path suffix (b::c::d)")[]) + unique path suffix (b::c::d)")) }); let ast_map = ast_map.expect("--pretty flowgraph missing ast_map"); @@ -610,7 +610,7 @@ pub fn pretty_print_input(sess: Session, let node = ast_map.find(nodeid).unwrap_or_else(|| { sess.fatal(&format!("--pretty flowgraph couldn't find id: {}", - nodeid)[]) + nodeid)) }); let code = blocks::Code::from_node(node); diff --git a/src/librustc_privacy/lib.rs b/src/librustc_privacy/lib.rs index 5662a74a53d34..eae02e0bf66c4 100644 --- a/src/librustc_privacy/lib.rs +++ b/src/librustc_privacy/lib.rs @@ -712,7 +712,7 @@ impl<'a, 'tcx> PrivacyVisitor<'a, 'tcx> { method_id, None, &format!("method `{}`", - string)[])); + string))); } // Checks that a path is in scope. 
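Beyond the slicing cleanups, the driver hunks above make a real interface change: `CompileState` gains an `expanded_crate` field (the old `krate` field of `state_after_analysis` is renamed to it), one of the `state_after_*` constructors now also receives `&ast_map.krate()`, and the save-analysis callback reads `state.expanded_crate` instead of `state.krate`. The states are assembled with struct-update syntax; here is a minimal sketch of that pattern using hypothetical stand-in types, not the real `CompileState`:

```rust
// Hypothetical stand-in types; the real CompileState holds borrowed compiler data.
#[derive(Default)]
struct State<'a> {
    crate_name: Option<&'a str>,
    expanded_crate: Option<&'a str>, // stands in for Option<&'a ast::Crate>
    out_dir: Option<&'a str>,
}

impl<'a> State<'a> {
    fn empty(out_dir: Option<&'a str>) -> State<'a> {
        State { out_dir, ..State::default() }
    }

    // Each phase constructor fills in only what that phase has produced and
    // inherits everything else from the empty template via `..`.
    fn after_expansion(out_dir: Option<&'a str>,
                       expanded_crate: &'a str,
                       crate_name: &'a str) -> State<'a> {
        State {
            crate_name: Some(crate_name),
            expanded_crate: Some(expanded_crate),
            ..State::empty(out_dir)
        }
    }
}

fn main() {
    let state = State::after_expansion(Some("target"), "<expanded crate>", "mycrate");
    // A later callback can now consume the expanded crate from the state.
    assert_eq!(state.expanded_crate, Some("<expanded crate>"));
}
```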
@@ -727,7 +727,7 @@ impl<'a, 'tcx> PrivacyVisitor<'a, 'tcx> { self.ensure_public(span, def, Some(origdid), - &format!("{} `{}`", tyname, name)[]) + &format!("{} `{}`", tyname, name)) }; match self.last_private_map[path_id] { diff --git a/src/librustc_resolve/build_reduced_graph.rs b/src/librustc_resolve/build_reduced_graph.rs index 2f25f34a92ad0..3b3106af818b3 100644 --- a/src/librustc_resolve/build_reduced_graph.rs +++ b/src/librustc_resolve/build_reduced_graph.rs @@ -220,14 +220,14 @@ impl<'a, 'b:'a, 'tcx:'b> GraphBuilder<'a, 'b, 'tcx> { self.resolve_error(sp, &format!("duplicate definition of {} `{}`", namespace_error_to_string(duplicate_type), - token::get_name(name))[]); + token::get_name(name))); { let r = child.span_for_namespace(ns); if let Some(sp) = r { self.session.span_note(sp, &format!("first definition of {} `{}` here", namespace_error_to_string(duplicate_type), - token::get_name(name))[]); + token::get_name(name))); } } } @@ -307,8 +307,8 @@ impl<'a, 'b:'a, 'tcx:'b> GraphBuilder<'a, 'b, 'tcx> { ViewPathSimple(binding, ref full_path) => { let source_name = full_path.segments.last().unwrap().identifier.name; - if &token::get_name(source_name)[] == "mod" || - &token::get_name(source_name)[] == "self" { + if &token::get_name(source_name)[..] == "mod" || + &token::get_name(source_name)[..] == "self" { self.resolve_error(view_path.span, "`self` imports are only allowed within a { } list"); } @@ -1192,7 +1192,7 @@ impl<'a, 'b:'a, 'tcx:'b> GraphBuilder<'a, 'b, 'tcx> { debug!("(building import directive) building import \ directive: {}::{}", self.names_to_string(&module_.imports.borrow().last().unwrap(). - module_path[]), + module_path), token::get_name(target)); let mut import_resolutions = module_.import_resolutions diff --git a/src/librustc_resolve/lib.rs b/src/librustc_resolve/lib.rs index 333d32d76b6d5..062ea885bf40a 100644 --- a/src/librustc_resolve/lib.rs +++ b/src/librustc_resolve/lib.rs @@ -1068,7 +1068,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { }; let msg = format!("unresolved import `{}`{}", self.import_path_to_string( - &import_directive.module_path[], + &import_directive.module_path, import_directive.subclass), help); self.resolve_error(span, &msg[..]); @@ -2247,7 +2247,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { true) { Failed(Some((span, msg))) => self.resolve_error(span, &format!("failed to resolve. {}", - msg)[]), + msg)), Failed(None) => (), // Continue up the search chain. Indeterminate => { // We couldn't see through the higher scope because of an @@ -2603,7 +2603,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { match def_like { DlDef(d @ DefUpvar(..)) => { self.session.span_bug(span, - &format!("unexpected {:?} in bindings", d)[]) + &format!("unexpected {:?} in bindings", d)) } DlDef(d @ DefLocal(_)) => { let node_id = d.def_id().node; @@ -2931,7 +2931,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { self.resolve_struct(item.id, generics, - &struct_def.fields[]); + &struct_def.fields); } ItemMod(ref module_) => { @@ -3019,7 +3019,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { parameter in this type \ parameter list", token::get_name( - name))[]) + name))) } seen_bindings.insert(name); @@ -3204,14 +3204,14 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { self.resolve_error(trait_reference.path.span, &format!("`{}` is not a trait", self.path_names_to_string( - &trait_reference.path))[]); + &trait_reference.path))); // If it's a typedef, give a note if let DefTy(..) 
= def { self.session.span_note( trait_reference.path.span, &format!("`type` aliases cannot be used for traits") - []); + ); } } } @@ -3408,7 +3408,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { self.resolve_error(span, &format!("method `{}` is not a member of trait `{}`", token::get_name(name), - path_str)[]); + path_str)); } } } @@ -3477,7 +3477,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { &format!("variable `{}` from pattern #1 is \ not bound in pattern #{}", token::get_name(key), - i + 1)[]); + i + 1)); } Some(binding_i) => { if binding_0.binding_mode != binding_i.binding_mode { @@ -3486,7 +3486,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { &format!("variable `{}` is bound with different \ mode in pattern #{} than in pattern #1", token::get_name(key), - i + 1)[]); + i + 1)); } } } @@ -3499,7 +3499,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { &format!("variable `{}` from pattern {}{} is \ not bound in pattern {}1", token::get_name(key), - "#", i + 1, "#")[]); + "#", i + 1, "#")); } } } @@ -3698,7 +3698,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { &format!("declaration of `{}` shadows an enum \ variant or unit-like struct in \ scope", - token::get_name(renamed))[]); + token::get_name(renamed))); } FoundConst(ref def, lp) if mode == RefutableMode => { debug!("(resolving pattern) resolving `{}` to \ @@ -3750,7 +3750,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { list", token::get_ident( ident)) - []) + ) } else if bindings_list.get(&renamed) == Some(&pat_id) { // Then this is a duplicate variable in the @@ -3759,7 +3759,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { &format!("identifier `{}` is bound \ more than once in the same \ pattern", - token::get_ident(ident))[]); + token::get_ident(ident))); } // Else, not bound in the same pattern: do // nothing. @@ -3883,7 +3883,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { match err { Some((span, msg)) => { self.resolve_error(span, &format!("failed to resolve: {}", - msg)[]); + msg)); } None => () } @@ -4093,7 +4093,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { }; self.resolve_error(span, &format!("failed to resolve. {}", - msg)[]); + msg)); return None; } Indeterminate => panic!("indeterminate unexpected"), @@ -4152,7 +4152,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { }; self.resolve_error(span, &format!("failed to resolve. {}", - msg)[]); + msg)); return None; } @@ -4193,7 +4193,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { } TypeNS => { let name = ident.name; - self.search_ribs(&self.type_ribs[], name, span) + self.search_ribs(&self.type_ribs, name, span) } }; @@ -4248,7 +4248,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { match err { Some((span, msg)) => self.resolve_error(span, &format!("failed to resolve. {}", - msg)[]), + msg)), None => () } @@ -4410,7 +4410,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { values[smallest] != usize::MAX && values[smallest] < name.len() + 2 && values[smallest] <= max_distance && - name != &maybes[smallest][] { + name != &maybes[smallest][..] { Some(maybes[smallest].to_string()) @@ -4502,7 +4502,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { false // Stop advancing }); - if method_scope && &token::get_name(self.self_name)[] + if method_scope && &token::get_name(self.self_name)[..] == path_name { self.resolve_error( expr.span, @@ -4592,7 +4592,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { self.resolve_error( expr.span, &format!("use of undeclared label `{}`", - token::get_ident(label))[]) + token::get_ident(label))) } Some(DlDef(def @ DefLabel(_))) => { // Since this def is a label, it is never read. 
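Looking back at `librustc/util/nodemap.rs` near the start of this patch: the `FnvHasher` that survives the stage0 cleanup is a plain FNV-1a hasher, seeded with the 64-bit offset basis `0xcbf29ce484222325` and using the xor-then-multiply loop visible in the deleted `Writer` impl. A standalone sketch of the same algorithm outside the `Hasher` trait (not rustc's actual type):

```rust
// FNV-1a over a byte slice, using the same constants as FnvHasher:
// offset basis 0xcbf29ce484222325 and prime 0x100000001b3 (64-bit variant).
fn fnv1a(bytes: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf29ce484222325;
    for &byte in bytes {
        hash ^= byte as u64;                     // mix in the byte first...
        hash = hash.wrapping_mul(0x100000001b3); // ...then multiply by the prime
    }
    hash
}

fn main() {
    // FNV is cheap for the short, integer-like keys these maps use; the default
    // hasher is hardened against collision attacks but slower.
    println!("{:x}", fnv1a(b"NodeId(42)"));
}
```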
@@ -4731,7 +4731,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { then {:?}", node_id, *entry.get(), - def)[]); + def)); }, Vacant(entry) => { entry.insert(def); }, } @@ -4747,7 +4747,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { self.resolve_error(pat.span, &format!("cannot use `ref` binding mode \ with {}", - descr)[]); + descr)); } } } @@ -4783,7 +4783,7 @@ impl<'a, 'tcx> Resolver<'a, 'tcx> { return "???".to_string(); } self.names_to_string(&names.into_iter().rev() - .collect::>()[]) + .collect::>()) } #[allow(dead_code)] // useful for debugging diff --git a/src/librustc_trans/back/link.rs b/src/librustc_trans/back/link.rs index ef849bb3dca05..ea5001aa814b4 100644 --- a/src/librustc_trans/back/link.rs +++ b/src/librustc_trans/back/link.rs @@ -191,17 +191,17 @@ fn symbol_hash<'tcx>(tcx: &ty::ctxt<'tcx>, // to be independent of one another in the crate. symbol_hasher.reset(); - symbol_hasher.input_str(&link_meta.crate_name[]); + symbol_hasher.input_str(&link_meta.crate_name); symbol_hasher.input_str("-"); symbol_hasher.input_str(link_meta.crate_hash.as_str()); for meta in &*tcx.sess.crate_metadata.borrow() { symbol_hasher.input_str(&meta[..]); } symbol_hasher.input_str("-"); - symbol_hasher.input_str(&encoder::encoded_ty(tcx, t)[]); + symbol_hasher.input_str(&encoder::encoded_ty(tcx, t)); // Prefix with 'h' so that it never blends into adjacent digits let mut hash = String::from_str("h"); - hash.push_str(&truncated_hash_result(symbol_hasher)[]); + hash.push_str(&truncated_hash_result(symbol_hasher)); hash } @@ -288,7 +288,7 @@ pub fn mangle>(path: PI, fn push(n: &mut String, s: &str) { let sani = sanitize(s); - n.push_str(&format!("{}{}", sani.len(), sani)[]); + n.push_str(&format!("{}{}", sani.len(), sani)); } // First, connect each component with pairs. @@ -361,7 +361,7 @@ pub fn remove(sess: &Session, path: &Path) { Err(e) => { sess.err(&format!("failed to remove {}: {}", path.display(), - e)[]); + e)); } } } @@ -376,7 +376,7 @@ pub fn link_binary(sess: &Session, for &crate_type in &*sess.crate_types.borrow() { if invalid_output_for_target(sess, crate_type) { sess.bug(&format!("invalid output type `{:?}` for target os `{}`", - crate_type, sess.opts.target_triple)[]); + crate_type, sess.opts.target_triple)); } let out_file = link_binary_output(sess, trans, crate_type, outputs, crate_name); @@ -441,8 +441,8 @@ pub fn filename_for_input(sess: &Session, out_filename.with_filename(format!("lib{}.rlib", libname)) } config::CrateTypeDylib => { - let (prefix, suffix) = (&sess.target.target.options.dll_prefix[], - &sess.target.target.options.dll_suffix[]); + let (prefix, suffix) = (&sess.target.target.options.dll_prefix, + &sess.target.target.options.dll_suffix); out_filename.with_filename(format!("{}{}{}", prefix, libname, @@ -452,7 +452,7 @@ pub fn filename_for_input(sess: &Session, out_filename.with_filename(format!("lib{}.a", libname)) } config::CrateTypeExecutable => { - let suffix = &sess.target.target.options.exe_suffix[]; + let suffix = &sess.target.target.options.exe_suffix; out_filename.with_filename(format!("{}{}", libname, suffix)) } } @@ -481,12 +481,12 @@ fn link_binary_output(sess: &Session, if !out_is_writeable { sess.fatal(&format!("output file {} is not writeable -- check its \ permissions.", - out_filename.display())[]); + out_filename.display())); } else if !obj_is_writeable { sess.fatal(&format!("object file {} is not writeable -- check its \ permissions.", - obj_filename.display())[]); + obj_filename.display())); } match crate_type { @@ -588,12 +588,12 @@ fn link_rlib<'a>(sess: &'a 
Session, // the same filename for metadata (stomping over one another) let tmpdir = TempDir::new("rustc").ok().expect("needs a temp dir"); let metadata = tmpdir.path().join(METADATA_FILENAME); - match fs::File::create(&metadata).write_all(&trans.metadata[]) { + match fs::File::create(&metadata).write_all(&trans.metadata) { Ok(..) => {} Err(e) => { sess.err(&format!("failed to write {}: {}", metadata.display(), - e)[]); + e)); sess.abort_if_errors(); } } @@ -611,25 +611,25 @@ fn link_rlib<'a>(sess: &'a Session, // was exactly 16 bytes. let bc_filename = obj_filename.with_extension(&format!("{}.bc", i)); let bc_deflated_filename = obj_filename.with_extension( - &format!("{}.bytecode.deflate", i)[]); + &format!("{}.bytecode.deflate", i)); let bc_data = match fs::File::open(&bc_filename).read_to_end() { Ok(buffer) => buffer, Err(e) => sess.fatal(&format!("failed to read bytecode: {}", - e)[]) + e)) }; let bc_data_deflated = match flate::deflate_bytes(&bc_data[..]) { Some(compressed) => compressed, None => sess.fatal(&format!("failed to compress bytecode from {}", - bc_filename.display())[]) + bc_filename.display())) }; let mut bc_file_deflated = match fs::File::create(&bc_deflated_filename) { Ok(file) => file, Err(e) => { sess.fatal(&format!("failed to create compressed bytecode \ - file: {}", e)[]) + file: {}", e)) } }; @@ -638,7 +638,7 @@ fn link_rlib<'a>(sess: &'a Session, Ok(()) => {} Err(e) => { sess.err(&format!("failed to write compressed bytecode: \ - {}", e)[]); + {}", e)); sess.abort_if_errors() } }; @@ -729,7 +729,7 @@ fn link_staticlib(sess: &Session, obj_filename: &Path, out_filename: &Path) { let p = match *path { Some(ref p) => p.clone(), None => { sess.err(&format!("could not find rlib for: `{}`", - name)[]); + name)); continue } }; @@ -755,7 +755,7 @@ fn link_staticlib(sess: &Session, obj_filename: &Path, out_filename: &Path) { cstore::NativeUnknown => "library", cstore::NativeFramework => "framework", }; - sess.note(&format!("{}: {}", name, *lib)[]); + sess.note(&format!("{}: {}", name, *lib)); } } @@ -771,10 +771,10 @@ fn link_natively(sess: &Session, trans: &CrateTranslation, dylib: bool, let pname = get_cc_prog(sess); let mut cmd = Command::new(&pname[..]); - cmd.args(&sess.target.target.options.pre_link_args[]); + cmd.args(&sess.target.target.options.pre_link_args); link_args(&mut cmd, sess, dylib, tmpdir.path(), trans, obj_filename, out_filename); - cmd.args(&sess.target.target.options.post_link_args[]); + cmd.args(&sess.target.target.options.post_link_args); if !sess.target.target.options.no_compiler_rt { cmd.arg("-lcompiler-rt"); } @@ -794,10 +794,10 @@ fn link_natively(sess: &Session, trans: &CrateTranslation, dylib: bool, if !prog.status.success() { sess.err(&format!("linking with `{}` failed: {}", pname, - prog.status)[]); - sess.note(&format!("{:?}", &cmd)[]); + prog.status)); + sess.note(&format!("{:?}", &cmd)); let mut output = prog.error.clone(); - output.push_all(&prog.output[]); + output.push_all(&prog.output); sess.note(str::from_utf8(&output[..]).unwrap()); sess.abort_if_errors(); } @@ -807,7 +807,7 @@ fn link_natively(sess: &Session, trans: &CrateTranslation, dylib: bool, Err(e) => { sess.err(&format!("could not exec the linker `{}`: {}", pname, - e)[]); + e)); sess.abort_if_errors(); } } @@ -819,7 +819,7 @@ fn link_natively(sess: &Session, trans: &CrateTranslation, dylib: bool, match Command::new("dsymutil").arg(out_filename).output() { Ok(..) 
=> {} Err(e) => { - sess.err(&format!("failed to run dsymutil: {}", e)[]); + sess.err(&format!("failed to run dsymutil: {}", e)); sess.abort_if_errors(); } } @@ -1005,7 +1005,7 @@ fn link_args(cmd: &mut Command, // addl_lib_search_paths if sess.opts.cg.rpath { let sysroot = sess.sysroot(); - let target_triple = &sess.opts.target_triple[]; + let target_triple = &sess.opts.target_triple; let get_install_prefix_lib_path = || { let install_prefix = option_env!("CFG_PREFIX").expect("CFG_PREFIX"); let tlib = filesearch::relative_target_lib_path(sysroot, target_triple); @@ -1022,13 +1022,13 @@ fn link_args(cmd: &mut Command, get_install_prefix_lib_path: get_install_prefix_lib_path, realpath: ::util::fs::realpath }; - cmd.args(&rpath::get_rpath_flags(rpath_config)[]); + cmd.args(&rpath::get_rpath_flags(rpath_config)); } // Finally add all the linker arguments provided on the command line along // with any #[link_args] attributes found inside the crate let empty = Vec::new(); - cmd.args(&sess.opts.cg.link_args.as_ref().unwrap_or(&empty)[]); + cmd.args(&sess.opts.cg.link_args.as_ref().unwrap_or(&empty)); cmd.args(&used_link_args[..]); } @@ -1189,7 +1189,7 @@ fn add_upstream_rust_crates(cmd: &mut Command, sess: &Session, let name = cratepath.filename_str().unwrap(); let name = &name[3..name.len() - 5]; // chop off lib/.rlib time(sess.time_passes(), - &format!("altering {}.rlib", name)[], + &format!("altering {}.rlib", name), (), |()| { let dst = tmpdir.join(cratepath.filename().unwrap()); match fs::copy(&cratepath, &dst) { @@ -1198,7 +1198,7 @@ fn add_upstream_rust_crates(cmd: &mut Command, sess: &Session, sess.err(&format!("failed to copy {} to {}: {}", cratepath.display(), dst.display(), - e)[]); + e)); sess.abort_if_errors(); } } @@ -1210,7 +1210,7 @@ fn add_upstream_rust_crates(cmd: &mut Command, sess: &Session, Err(e) => { sess.err(&format!("failed to chmod {} when preparing \ for LTO: {}", dst.display(), - e)[]); + e)); sess.abort_if_errors(); } } @@ -1224,9 +1224,9 @@ fn add_upstream_rust_crates(cmd: &mut Command, sess: &Session, maybe_ar_prog: sess.opts.cg.ar.clone() }; let mut archive = Archive::open(config); - archive.remove_file(&format!("{}.o", name)[]); + archive.remove_file(&format!("{}.o", name)); let files = archive.files(); - if files.iter().any(|s| s[].ends_with(".o")) { + if files.iter().any(|s| s.ends_with(".o")) { cmd.arg(dst); } }); diff --git a/src/librustc_trans/back/lto.rs b/src/librustc_trans/back/lto.rs index 0a0f2a9c18627..9507da2febbdb 100644 --- a/src/librustc_trans/back/lto.rs +++ b/src/librustc_trans/back/lto.rs @@ -54,7 +54,7 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, Some(p) => p, None => { sess.fatal(&format!("could not find rlib for: `{}`", - name)[]); + name)); } }; @@ -68,7 +68,7 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, (), |_| { archive.read(&format!("{}.{}.bytecode.deflate", - file, i)[]) + file, i)) }); let bc_encoded = match bc_encoded { Some(data) => data, @@ -76,7 +76,7 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, if i == 0 { // No bitcode was found at all. sess.fatal(&format!("missing compressed bytecode in {}", - path.display())[]); + path.display())); } // No more bitcode files to read. 
break; @@ -99,12 +99,12 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, Some(inflated) => inflated, None => { sess.fatal(&format!("failed to decompress bc of `{}`", - name)[]) + name)) } } } else { sess.fatal(&format!("Unsupported bytecode format version {}", - version)[]) + version)) } }) } else { @@ -115,7 +115,7 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, Some(bc) => bc, None => { sess.fatal(&format!("failed to decompress bc of `{}`", - name)[]) + name)) } } }) @@ -124,7 +124,7 @@ pub fn run(sess: &session::Session, llmod: ModuleRef, let ptr = bc_decoded.as_ptr(); debug!("linking {}, part {}", name, i); time(sess.time_passes(), - &format!("ll link {}.{}", name, i)[], + &format!("ll link {}.{}", name, i), (), |()| unsafe { if !llvm::LLVMRustLinkInExternalBitcode(llmod, diff --git a/src/librustc_trans/back/write.rs b/src/librustc_trans/back/write.rs index 86b720d3fc171..a1fc63778ce8a 100644 --- a/src/librustc_trans/back/write.rs +++ b/src/librustc_trans/back/write.rs @@ -54,7 +54,7 @@ pub fn llvm_err(handler: &diagnostic::Handler, msg: String) -> ! { libc::free(cstr as *mut _); handler.fatal(&format!("{}: {}", &msg[..], - &err[..])[]); + &err[..])); } } } @@ -104,13 +104,13 @@ impl SharedEmitter { match diag.code { Some(ref code) => { handler.emit_with_code(None, - &diag.msg[], + &diag.msg, &code[..], diag.lvl); }, None => { handler.emit(None, - &diag.msg[], + &diag.msg, diag.lvl); }, } @@ -166,7 +166,7 @@ fn get_llvm_opt_level(optimize: config::OptLevel) -> llvm::CodeGenOptLevel { fn create_target_machine(sess: &Session) -> TargetMachineRef { let reloc_model_arg = match sess.opts.cg.relocation_model { Some(ref s) => &s[..], - None => &sess.target.target.options.relocation_model[] + None => &sess.target.target.options.relocation_model[..], }; let reloc_model = match reloc_model_arg { "pic" => llvm::RelocPIC, @@ -177,7 +177,7 @@ fn create_target_machine(sess: &Session) -> TargetMachineRef { sess.err(&format!("{:?} is not a valid relocation mode", sess.opts .cg - .relocation_model)[]); + .relocation_model)); sess.abort_if_errors(); unreachable!(); } @@ -199,7 +199,7 @@ fn create_target_machine(sess: &Session) -> TargetMachineRef { let code_model_arg = match sess.opts.cg.code_model { Some(ref s) => &s[..], - None => &sess.target.target.options.code_model[] + None => &sess.target.target.options.code_model[..], }; let code_model = match code_model_arg { @@ -212,13 +212,13 @@ fn create_target_machine(sess: &Session) -> TargetMachineRef { sess.err(&format!("{:?} is not a valid code model", sess.opts .cg - .code_model)[]); + .code_model)); sess.abort_if_errors(); unreachable!(); } }; - let triple = &sess.target.target.llvm_target[]; + let triple = &sess.target.target.llvm_target; let tm = unsafe { let triple = CString::new(triple.as_bytes()).unwrap(); @@ -526,14 +526,14 @@ unsafe fn optimize_and_codegen(cgcx: &CodegenContext, } if config.emit_asm { - let path = output_names.with_extension(&format!("{}.s", name_extra)[]); + let path = output_names.with_extension(&format!("{}.s", name_extra)); with_codegen(tm, llmod, config.no_builtins, |cpm| { write_output_file(cgcx.handler, tm, cpm, llmod, &path, llvm::AssemblyFileType); }); } if config.emit_obj { - let path = output_names.with_extension(&format!("{}.o", name_extra)[]); + let path = output_names.with_extension(&format!("{}.o", name_extra)); with_codegen(tm, llmod, config.no_builtins, |cpm| { write_output_file(cgcx.handler, tm, cpm, llmod, &path, llvm::ObjectFileType); }); @@ -647,7 +647,7 @@ pub fn run_passes(sess: &Session, 
// Process the work items, optionally using worker threads. if sess.opts.cg.codegen_units == 1 { - run_work_singlethreaded(sess, &trans.reachable[], work_items); + run_work_singlethreaded(sess, &trans.reachable, work_items); } else { run_work_multithreaded(sess, work_items, sess.opts.cg.codegen_units); } @@ -679,7 +679,7 @@ pub fn run_passes(sess: &Session, // 2) Multiple codegen units, with `-o some_name`. We have // no good solution for this case, so warn the user. sess.warn(&format!("ignoring -o because multiple .{} files were produced", - ext)[]); + ext)); } else { // 3) Multiple codegen units, but no `-o some_name`. We // just leave the `foo.0.x` files in place. @@ -713,18 +713,18 @@ pub fn run_passes(sess: &Session, let pname = get_cc_prog(sess); let mut cmd = Command::new(&pname[..]); - cmd.args(&sess.target.target.options.pre_link_args[]); + cmd.args(&sess.target.target.options.pre_link_args); cmd.arg("-nostdlib"); for index in 0..trans.modules.len() { - cmd.arg(crate_output.with_extension(&format!("{}.o", index)[])); + cmd.arg(crate_output.with_extension(&format!("{}.o", index))); } cmd.arg("-r") .arg("-o") .arg(windows_output_path.as_ref().unwrap_or(output_path)); - cmd.args(&sess.target.target.options.post_link_args[]); + cmd.args(&sess.target.target.options.post_link_args); if sess.opts.debugging_opts.print_link_args { println!("{:?}", &cmd); @@ -737,14 +737,14 @@ pub fn run_passes(sess: &Session, Ok(status) => { if !status.success() { sess.err(&format!("linking of {} with `{:?}` failed", - output_path.display(), cmd)[]); + output_path.display(), cmd)); sess.abort_if_errors(); } }, Err(e) => { sess.err(&format!("could not exec the linker `{}`: {}", pname, - e)[]); + e)); sess.abort_if_errors(); }, } @@ -971,10 +971,10 @@ pub fn run_assembler(sess: &Session, outputs: &OutputFilenames) { if !prog.status.success() { sess.err(&format!("linking with `{}` failed: {}", pname, - prog.status)[]); - sess.note(&format!("{:?}", &cmd)[]); + prog.status)); + sess.note(&format!("{:?}", &cmd)); let mut note = prog.error.clone(); - note.push_all(&prog.output[]); + note.push_all(&prog.output); sess.note(str::from_utf8(¬e[..]).unwrap()); sess.abort_if_errors(); } @@ -982,7 +982,7 @@ pub fn run_assembler(sess: &Session, outputs: &OutputFilenames) { Err(e) => { sess.err(&format!("could not exec the linker `{}`: {}", pname, - e)[]); + e)); sess.abort_if_errors(); } } @@ -1018,7 +1018,7 @@ unsafe fn configure_llvm(sess: &Session) { if sess.target.target.arch == "aarch64" { add("-fast-isel=0"); } for arg in &sess.opts.cg.llvm_args { - add(&(*arg)[]); + add(&(*arg)); } } diff --git a/src/librustc_trans/save/mod.rs b/src/librustc_trans/save/mod.rs index 8d2a2d51ee423..28dcbe3ae86b2 100644 --- a/src/librustc_trans/save/mod.rs +++ b/src/librustc_trans/save/mod.rs @@ -94,7 +94,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { // dump info about all the external crates referenced from this crate self.sess.cstore.iter_crate_data(|n, cmd| { - self.fmt.external_crate_str(krate.span, &cmd.name[], n); + self.fmt.external_crate_str(krate.span, &cmd.name, n); }); self.fmt.recorder.record("end_external_crates\n"); } @@ -216,7 +216,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { fn lookup_type_ref(&self, ref_id: NodeId) -> Option { if !self.analysis.ty_cx.def_map.borrow().contains_key(&ref_id) { self.sess.bug(&format!("def_map has no key for {} in lookup_type_ref", - ref_id)[]); + ref_id)); } let def = (*self.analysis.ty_cx.def_map.borrow())[ref_id]; match def { @@ -229,7 +229,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { let 
def_map = self.analysis.ty_cx.def_map.borrow(); if !def_map.contains_key(&ref_id) { self.sess.span_bug(span, &format!("def_map has no key for {} in lookup_def_kind", - ref_id)[]); + ref_id)); } let def = (*def_map)[ref_id]; match def { @@ -258,7 +258,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { def::DefMethod(..) | def::DefPrimTy(_) => { self.sess.span_bug(span, &format!("lookup_def_kind for unexpected item: {:?}", - def)[]); + def)); }, } } @@ -279,7 +279,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { span_utils.span_for_last_ident(p.span), id, qualname, - &path_to_string(p)[], + &path_to_string(p), &typ[..]); } self.collected_paths.clear(); @@ -302,7 +302,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { match item.node { ast::ItemImpl(_, _, _, _, ref ty, _) => { let mut result = String::from_str("<"); - result.push_str(&ty_to_string(&**ty)[]); + result.push_str(&ty_to_string(&**ty)); match ty::trait_of_item(&self.analysis.ty_cx, ast_util::local_def(method.id)) { @@ -319,7 +319,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { _ => { self.sess.span_bug(method.span, &format!("Container {} for method {} not an impl?", - impl_id.node, method.id)[]); + impl_id.node, method.id)); }, } }, @@ -329,7 +329,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { "Container {} for method {} is not a node item {:?}", impl_id.node, method.id, - self.analysis.ty_cx.map.get(impl_id.node))[]); + self.analysis.ty_cx.map.get(impl_id.node))); }, }, None => match ty::trait_of_item(&self.analysis.ty_cx, @@ -343,14 +343,14 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { _ => { self.sess.span_bug(method.span, &format!("Could not find container {} for method {}", - def_id.node, method.id)[]); + def_id.node, method.id)); } } }, None => { self.sess.span_bug(method.span, &format!("Could not find container for method {}", - method.id)[]); + method.id)); }, }, }; @@ -442,7 +442,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { scope_id), None => self.sess.span_bug(field.span, &format!("Could not find sub-span for field {}", - qualname)[]), + qualname)), } }, _ => (), @@ -528,7 +528,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { &get_ident(item.ident), &qualname[..], &value[..], - &ty_to_string(&*typ)[], + &ty_to_string(&*typ), self.cur_scope); // walk type and init value @@ -551,7 +551,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { &get_ident(item.ident), &qualname[..], "", - &ty_to_string(&*typ)[], + &ty_to_string(&*typ), self.cur_scope); // walk type and init value @@ -603,7 +603,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { &val[..]), None => self.sess.span_bug(item.span, &format!("Could not find subspan for enum {}", - enum_name)[]), + enum_name)), } for variant in &enum_definition.variants { let name = get_ident(variant.node.name); @@ -872,7 +872,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { &format!("Unexpected def kind while looking \ up path in `{}`: `{:?}`", self.span.snippet(span), - *def)[]), + *def)), } // modules or types in the path prefix match *def { @@ -1007,7 +1007,7 @@ impl <'l, 'tcx> DxrVisitor<'l, 'tcx> { None => { self.sess.span_bug(p.span, &format!("Could not find struct_def for `{}`", - self.span.snippet(p.span))[]); + self.span.snippet(p.span))); } }; for &Spanned { node: ref field, span } in fields { @@ -1255,7 +1255,7 @@ impl<'l, 'tcx, 'v> Visitor<'v> for DxrVisitor<'l, 'tcx> { None => { self.sess.span_bug(method_type.span, &format!("Could not find trait for method {}", - method_type.id)[]); + method_type.id)); }, }; @@ -1362,7 +1362,7 @@ impl<'l, 'tcx, 'v> Visitor<'v> for DxrVisitor<'l, 'tcx> { } } _ => self.sess.span_bug(ex.span, - 
&format!("Expected struct type, found {:?}", ty)[]), + &format!("Expected struct type, found {:?}", ty)), } }, ast::ExprTupField(ref sub_ex, idx) => { @@ -1391,7 +1391,7 @@ impl<'l, 'tcx, 'v> Visitor<'v> for DxrVisitor<'l, 'tcx> { ty::ty_tup(_) => {} _ => self.sess.span_bug(ex.span, &format!("Expected struct or tuple \ - type, found {:?}", ty)[]), + type, found {:?}", ty)), } }, ast::ExprClosure(_, ref decl, ref body) => { @@ -1400,7 +1400,7 @@ impl<'l, 'tcx, 'v> Visitor<'v> for DxrVisitor<'l, 'tcx> { } let mut id = String::from_str("$"); - id.push_str(&ex.id.to_string()[]); + id.push_str(&ex.id.to_string()); self.process_formals(&decl.inputs, &id[..]); // walk arg and return types @@ -1448,7 +1448,7 @@ impl<'l, 'tcx, 'v> Visitor<'v> for DxrVisitor<'l, 'tcx> { if !def_map.contains_key(&id) { self.sess.span_bug(p.span, &format!("def_map has no key for {} in visit_arm", - id)[]); + id)); } let def = &(*def_map)[id]; match *def { @@ -1463,7 +1463,7 @@ impl<'l, 'tcx, 'v> Visitor<'v> for DxrVisitor<'l, 'tcx> { self.fmt.variable_str(p.span, Some(p.span), id, - &path_to_string(p)[], + &path_to_string(p), &value[..], "") } @@ -1519,7 +1519,7 @@ impl<'l, 'tcx, 'v> Visitor<'v> for DxrVisitor<'l, 'tcx> { self.fmt.variable_str(p.span, sub_span, id, - &path_to_string(p)[], + &path_to_string(p), &value[..], &typ[..]); } @@ -1540,7 +1540,7 @@ pub fn process_crate(sess: &Session, } assert!(analysis.glob_map.is_some()); - let cratename = match attr::find_crate_name(&krate.attrs[]) { + let cratename = match attr::find_crate_name(&krate.attrs) { Some(name) => name.to_string(), None => { info!("Could not find crate name, using 'unknown_crate'"); @@ -1561,7 +1561,7 @@ pub fn process_crate(sess: &Session, match fs::mkdir_recursive(&root_path, old_io::USER_RWX) { Err(e) => sess.err(&format!("Could not create directory {}: {}", - root_path.display(), e)[]), + root_path.display(), e)), _ => (), } @@ -1578,7 +1578,7 @@ pub fn process_crate(sess: &Session, Ok(f) => box f, Err(e) => { let disp = root_path.display(); - sess.fatal(&format!("Could not open {}: {}", disp, e)[]); + sess.fatal(&format!("Could not open {}: {}", disp, e)); } }; root_path.pop(); diff --git a/src/librustc_trans/save/recorder.rs b/src/librustc_trans/save/recorder.rs index 08e36bb1d85bb..937f2d07677aa 100644 --- a/src/librustc_trans/save/recorder.rs +++ b/src/librustc_trans/save/recorder.rs @@ -162,7 +162,7 @@ impl<'a> FmtStrs<'a> { if values.len() != fields.len() { self.span.sess.span_bug(span, &format!( "Mismatch between length of fields for '{}', expected '{}', found '{}'", - kind, fields.len(), values.len())[]); + kind, fields.len(), values.len())); } let values = values.iter().map(|s| { @@ -191,7 +191,7 @@ impl<'a> FmtStrs<'a> { if needs_span { self.span.sess.span_bug(span, &format!( "Called record_without_span for '{}' which does requires a span", - label)[]); + label)); } assert!(!dump_spans); @@ -268,7 +268,7 @@ impl<'a> FmtStrs<'a> { // variable def's node id let mut qualname = String::from_str(name); qualname.push_str("$"); - qualname.push_str(&id.to_string()[]); + qualname.push_str(&id.to_string()); self.check_and_record(Variable, span, sub_span, diff --git a/src/librustc_trans/save/span_utils.rs b/src/librustc_trans/save/span_utils.rs index 223d46e4e4a93..a5bebaa257ca0 100644 --- a/src/librustc_trans/save/span_utils.rs +++ b/src/librustc_trans/save/span_utils.rs @@ -219,7 +219,7 @@ impl<'a> SpanUtils<'a> { let loc = self.sess.codemap().lookup_char_pos(span.lo); self.sess.span_bug(span, &format!("Mis-counted brackets when breaking 
path? Parsing '{}' in {}, line {}", - self.snippet(span), loc.file.name, loc.line)[]); + self.snippet(span), loc.file.name, loc.line)); } if result.is_none() && prev.tok.is_ident() && bracket_count == 0 { return self.make_sub_span(span, Some(prev.sp)); @@ -245,7 +245,7 @@ impl<'a> SpanUtils<'a> { let loc = self.sess.codemap().lookup_char_pos(span.lo); self.sess.span_bug(span, &format!( "Mis-counted brackets when breaking path? Parsing '{}' in {}, line {}", - self.snippet(span), loc.file.name, loc.line)[]); + self.snippet(span), loc.file.name, loc.line)); } return result } diff --git a/src/librustc_trans/trans/_match.rs b/src/librustc_trans/trans/_match.rs index 2826afb71a2c2..1a24b3fabf898 100644 --- a/src/librustc_trans/trans/_match.rs +++ b/src/librustc_trans/trans/_match.rs @@ -444,7 +444,7 @@ fn enter_match<'a, 'b, 'p, 'blk, 'tcx, F>(bcx: Block<'blk, 'tcx>, let _indenter = indenter(); m.iter().filter_map(|br| { - e(&br.pats[]).map(|pats| { + e(&br.pats).map(|pats| { let this = br.pats[col]; let mut bound_ptrs = br.bound_ptrs.clone(); match this.node { @@ -825,7 +825,7 @@ fn compare_values<'blk, 'tcx>(cx: Block<'blk, 'tcx>, let did = langcall(cx, None, &format!("comparison of `{}`", - cx.ty_to_string(rhs_t))[], + cx.ty_to_string(rhs_t)), StrEqFnLangItem); let t = ty::mk_str_slice(cx.tcx(), cx.tcx().mk_region(ty::ReStatic), ast::MutImmutable); // The comparison function gets the slices by value, so we have to make copies here. Even @@ -1375,7 +1375,7 @@ fn create_bindings_map<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, pat: &ast::Pat, "__llmatch"); trmode = TrByCopy(alloca_no_lifetime(bcx, llvariable_ty, - &bcx.ident(ident)[])); + &bcx.ident(ident))); } ast::BindByValue(_) => { // in this case, the final type of the variable will be T, @@ -1383,13 +1383,13 @@ fn create_bindings_map<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, pat: &ast::Pat, // above llmatch = alloca_no_lifetime(bcx, llvariable_ty.ptr_to(), - &bcx.ident(ident)[]); + &bcx.ident(ident)); trmode = TrByMove; } ast::BindByRef(_) => { llmatch = alloca_no_lifetime(bcx, llvariable_ty, - &bcx.ident(ident)[]); + &bcx.ident(ident)); trmode = TrByRef; } }; @@ -1610,7 +1610,7 @@ fn mk_binding_alloca<'blk, 'tcx, A, F>(bcx: Block<'blk, 'tcx>, let var_ty = node_id_type(bcx, p_id); // Allocate memory on stack for the binding. - let llval = alloc_ty(bcx, var_ty, &bcx.ident(*ident)[]); + let llval = alloc_ty(bcx, var_ty, &bcx.ident(*ident)); // Subtle: be sure that we *populate* the memory *before* // we schedule the cleanup. 
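One more back-reference, this time to the `mangle`/`push` helpers in `librustc_trans/back/link.rs` earlier in this patch: each sanitized path component is written length-prefixed, and `symbol_hash` produces an `h`-prefixed hash so that, per the comment there, it never blends into adjacent digits. A simplified self-contained sketch of that length-prefix scheme; the `_ZN`/`E` framing and the hash placement here are illustrative assumptions, not taken from the patch:

```rust
// Each path component becomes "<len><component>", so components can be parsed back
// unambiguously; the hash is prefixed with 'h' so it never blends into the adjacent
// length digits. The `_ZN`/`E` wrapper is an assumption for illustration.
fn push(out: &mut String, component: &str) {
    out.push_str(&format!("{}{}", component.len(), component));
}

fn mangle(path: &[&str], hash: &str) -> String {
    let mut out = String::from("_ZN");
    for &component in path {
        push(&mut out, component);
    }
    push(&mut out, &format!("h{}", hash));
    out.push('E');
    out
}

fn main() {
    assert_eq!(mangle(&["std", "vec", "Vec"], "a1b2c3"),
               "_ZN3std3vec3Vec7ha1b2c3E");
}
```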
@@ -1648,7 +1648,7 @@ fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, if bcx.sess().asm_comments() { add_comment(bcx, &format!("bind_irrefutable_pat(pat={})", - pat.repr(bcx.tcx()))[]); + pat.repr(bcx.tcx()))); } let _indenter = indenter(); diff --git a/src/librustc_trans/trans/adt.rs b/src/librustc_trans/trans/adt.rs index eaf6eaa2f089d..903de94020770 100644 --- a/src/librustc_trans/trans/adt.rs +++ b/src/librustc_trans/trans/adt.rs @@ -177,7 +177,7 @@ fn represent_type_uncached<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } ty::ty_enum(def_id, substs) => { let cases = get_cases(cx.tcx(), def_id, substs); - let hint = *ty::lookup_repr_hints(cx.tcx(), def_id)[].get(0) + let hint = *ty::lookup_repr_hints(cx.tcx(), def_id).get(0) .unwrap_or(&attr::ReprAny); let dtor = ty::ty_dtor(cx.tcx(), def_id).has_drop_flag(); @@ -210,7 +210,7 @@ fn represent_type_uncached<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, cx.sess().bug(&format!("non-C-like enum {} with specified \ discriminants", ty::item_path_str(cx.tcx(), - def_id))[]); + def_id))); } if cases.len() == 1 { @@ -228,7 +228,7 @@ fn represent_type_uncached<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let mut discr = 0; while discr < 2 { if cases[1 - discr].is_zerolen(cx, t) { - let st = mk_struct(cx, &cases[discr].tys[], + let st = mk_struct(cx, &cases[discr].tys, false, t); match cases[discr].find_ptr(cx) { Some(ref df) if df.len() == 1 && st.fields.len() == 1 => { @@ -318,7 +318,7 @@ fn represent_type_uncached<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let fields : Vec<_> = cases.iter().map(|c| { let mut ftys = vec!(ty_of_inttype(cx.tcx(), ity)); - ftys.push_all(&c.tys[]); + ftys.push_all(&c.tys); if dtor { ftys.push(cx.tcx().types.bool); } mk_struct(cx, &ftys[..], false, t) }).collect(); @@ -328,7 +328,7 @@ fn represent_type_uncached<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, General(ity, fields, dtor) } _ => cx.sess().bug(&format!("adt::represent_type called on non-ADT type: {}", - ty_to_string(cx.tcx(), t))[]) + ty_to_string(cx.tcx(), t))) } } @@ -414,7 +414,7 @@ fn find_discr_field_candidate<'tcx>(tcx: &ty::ctxt<'tcx>, impl<'tcx> Case<'tcx> { fn is_zerolen<'a>(&self, cx: &CrateContext<'a, 'tcx>, scapegoat: Ty<'tcx>) -> bool { - mk_struct(cx, &self.tys[], false, scapegoat).size == 0 + mk_struct(cx, &self.tys, false, scapegoat).size == 0 } fn find_ptr<'a>(&self, cx: &CrateContext<'a, 'tcx>) -> Option { @@ -504,7 +504,7 @@ fn range_to_inttype(cx: &CrateContext, hint: Hint, bounds: &IntBounds) -> IntTyp return ity; } attr::ReprExtern => { - attempts = match &cx.sess().target.target.arch[] { + attempts = match &cx.sess().target.target.arch[..] { // WARNING: the ARM EABI has two variants; the one corresponding to `at_least_32` // appears to be used on Linux and NetBSD, but some systems may use the variant // corresponding to `choose_shortest`. However, we don't run on those yet...? @@ -624,7 +624,7 @@ pub fn finish_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, match *r { CEnum(..) | General(..) | RawNullablePointer { .. } => { } Univariant(ref st, _) | StructWrappedNullablePointer { nonnull: ref st, .. } => - llty.set_struct_body(&struct_llfields(cx, st, false, false)[], + llty.set_struct_body(&struct_llfields(cx, st, false, false), st.packed) } } @@ -640,7 +640,7 @@ fn generic_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, Univariant(ref st, _) | StructWrappedNullablePointer { nonnull: ref st, .. 
} => { match name { None => { - Type::struct_(cx, &struct_llfields(cx, st, sizing, dst)[], + Type::struct_(cx, &struct_llfields(cx, st, sizing, dst), st.packed) } Some(name) => { assert_eq!(sizing, false); Type::named_struct(cx, name) } @@ -965,7 +965,7 @@ pub fn fold_variants<'blk, 'tcx, F>(bcx: Block<'blk, 'tcx>, for (discr, case) in cases.iter().enumerate() { let mut variant_cx = fcx.new_temp_block( - &format!("enum-variant-iter-{}", &discr.to_string())[] + &format!("enum-variant-iter-{}", &discr.to_string()) ); let rhs_val = C_integral(ll_inttype(ccx, ity), discr as u64, true); AddCase(llswitch, rhs_val, variant_cx.llbb); @@ -1070,7 +1070,7 @@ pub fn trans_const<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, r: &Repr<'tcx>, discr if discr == nndiscr { C_struct(ccx, &build_const_struct(ccx, nonnull, - vals)[], + vals), false) } else { let vals = nonnull.fields.iter().map(|&ty| { @@ -1080,7 +1080,7 @@ pub fn trans_const<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, r: &Repr<'tcx>, discr }).collect::>(); C_struct(ccx, &build_const_struct(ccx, nonnull, - &vals[..])[], + &vals[..]), false) } } diff --git a/src/librustc_trans/trans/base.rs b/src/librustc_trans/trans/base.rs index 3091c852f5587..9c0aa9f69576e 100644 --- a/src/librustc_trans/trans/base.rs +++ b/src/librustc_trans/trans/base.rs @@ -365,7 +365,7 @@ fn require_alloc_fn<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, Err(s) => { bcx.sess().fatal(&format!("allocation of `{}` {}", bcx.ty_to_string(info_ty), - s)[]); + s)); } } } @@ -444,7 +444,7 @@ pub fn set_llvm_fn_attrs(ccx: &CrateContext, attrs: &[ast::Attribute], llfn: Val for attr in attrs { let mut used = true; - match &attr.name()[] { + match &attr.name()[..] { "no_stack_check" => unset_split_stack(llfn), "no_split_stack" => { unset_split_stack(llfn); @@ -486,7 +486,7 @@ pub fn unset_split_stack(f: ValueRef) { // silently mangles such symbols, breaking our linkage model. 
pub fn note_unique_llvm_symbol(ccx: &CrateContext, sym: String) { if ccx.all_llvm_symbols().borrow().contains(&sym) { - ccx.sess().bug(&format!("duplicate LLVM symbol: {}", sym)[]); + ccx.sess().bug(&format!("duplicate LLVM symbol: {}", sym)); } ccx.all_llvm_symbols().borrow_mut().insert(sym); } @@ -541,7 +541,7 @@ pub fn bin_op_to_icmp_predicate(ccx: &CrateContext, op: ast::BinOp_, signed: boo ast::BiGe => if signed { llvm::IntSGE } else { llvm::IntUGE }, op => { ccx.sess().bug(&format!("comparison_op_to_icmp_predicate: expected \ - comparison operator, found {:?}", op)[]); + comparison operator, found {:?}", op)); } } } @@ -557,7 +557,7 @@ pub fn bin_op_to_fcmp_predicate(ccx: &CrateContext, op: ast::BinOp_) ast::BiGe => llvm::RealOGE, op => { ccx.sess().bug(&format!("comparison_op_to_fcmp_predicate: expected \ - comparison operator, found {:?}", op)[]); + comparison operator, found {:?}", op)); } } } @@ -735,8 +735,8 @@ pub fn iter_structural_ty<'blk, 'tcx, F>(cx: Block<'blk, 'tcx>, let variant_cx = fcx.new_temp_block( &format!("enum-iter-variant-{}", - &variant.disr_val.to_string()[]) - []); + &variant.disr_val.to_string()) + ); match adt::trans_case(cx, &*repr, variant.disr_val) { _match::SingleResult(r) => { AddCase(llswitch, r.val, variant_cx.llbb) @@ -761,7 +761,7 @@ pub fn iter_structural_ty<'blk, 'tcx, F>(cx: Block<'blk, 'tcx>, } _ => { cx.sess().unimpl(&format!("type in iter_structural_ty: {}", - ty_to_string(cx.tcx(), t))[]) + ty_to_string(cx.tcx(), t))) } } return cx; @@ -843,7 +843,7 @@ pub fn fail_if_zero_or_overflows<'blk, 'tcx>( } _ => { cx.sess().bug(&format!("fail-if-zero on unexpected type: {}", - ty_to_string(cx.tcx(), rhs_t))[]); + ty_to_string(cx.tcx(), rhs_t))); } }; let bcx = with_cond(cx, is_zero, |bcx| { @@ -1116,7 +1116,7 @@ pub fn call_lifetime_end(cx: Block, ptr: ValueRef) { pub fn call_memcpy(cx: Block, dst: ValueRef, src: ValueRef, n_bytes: ValueRef, align: u32) { let _icx = push_ctxt("call_memcpy"); let ccx = cx.ccx(); - let key = match &ccx.sess().target.target.target_pointer_width[] { + let key = match &ccx.sess().target.target.target_pointer_width[..] { "32" => "llvm.memcpy.p0i8.p0i8.i32", "64" => "llvm.memcpy.p0i8.p0i8.i64", tws => panic!("Unsupported target word size for memcpy: {}", tws), @@ -1163,7 +1163,7 @@ fn memzero<'a, 'tcx>(b: &Builder<'a, 'tcx>, llptr: ValueRef, ty: Ty<'tcx>) { let llty = type_of::type_of(ccx, ty); - let intrinsic_key = match &ccx.sess().target.target.target_pointer_width[] { + let intrinsic_key = match &ccx.sess().target.target.target_pointer_width[..] { "32" => "llvm.memset.p0i8.i32", "64" => "llvm.memset.p0i8.i64", tws => panic!("Unsupported target word size for memset: {}", tws), @@ -1833,14 +1833,14 @@ pub fn trans_closure<'a, 'b, 'tcx>(ccx: &CrateContext<'a, 'tcx>, closure::ClosureEnv::NotClosure => { copy_args_to_allocas(bcx, arg_scope, - &decl.inputs[], + &decl.inputs, arg_datums) } closure::ClosureEnv::Closure(_) => { copy_closure_args_to_allocas( bcx, arg_scope, - &decl.inputs[], + &decl.inputs, arg_datums, &monomorphized_arg_types[..]) } @@ -1964,7 +1964,7 @@ pub fn trans_named_tuple_constructor<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>, _ => ccx.sess().bug( &format!("trans_enum_variant_constructor: \ unexpected ctor return type {}", - ctor_ty.repr(tcx))[]) + ctor_ty.repr(tcx))) }; // Get location to store the result. 
If the user does not care about @@ -2042,7 +2042,7 @@ fn trans_enum_variant_or_tuple_like_struct<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx _ => ccx.sess().bug( &format!("trans_enum_variant_or_tuple_like_struct: \ unexpected ctor return type {}", - ty_to_string(ccx.tcx(), ctor_ty))[]) + ty_to_string(ccx.tcx(), ctor_ty))) }; let (arena, fcx): (TypedArena<_>, FunctionContext); @@ -2143,7 +2143,7 @@ fn enum_variant_size_lint(ccx: &CrateContext, enum_def: &ast::EnumDef, sp: Span, *lvlsrc.unwrap(), Some(sp), &format!("enum variant is more than three times larger \ ({} bytes) than the next largest (ignoring padding)", - largest)[]); + largest)); ccx.sess().span_note(enum_def.variants[largest_index].span, "this variant is the largest"); @@ -2261,7 +2261,7 @@ pub fn trans_item(ccx: &CrateContext, item: &ast::Item) { match item.node { ast::ItemFn(ref decl, _fn_style, abi, ref generics, ref body) => { if !generics.is_type_parameterized() { - let trans_everywhere = attr::requests_inline(&item.attrs[]); + let trans_everywhere = attr::requests_inline(&item.attrs); // Ignore `trans_everywhere` for cross-crate inlined items // (`from_external`). `trans_item` will be called once for each // compilation unit that references the item, so it will still get @@ -2273,7 +2273,7 @@ pub fn trans_item(ccx: &CrateContext, item: &ast::Item) { foreign::trans_rust_fn_with_foreign_abi(ccx, &**decl, &**body, - &item.attrs[], + &item.attrs, llfn, empty_substs, item.id, @@ -2285,7 +2285,7 @@ pub fn trans_item(ccx: &CrateContext, item: &ast::Item) { llfn, empty_substs, item.id, - &item.attrs[]); + &item.attrs); } update_linkage(ccx, llfn, @@ -2332,7 +2332,7 @@ pub fn trans_item(ccx: &CrateContext, item: &ast::Item) { // Do static_assert checking. It can't really be done much earlier // because we need to get the value of the bool out of LLVM - if attr::contains_name(&item.attrs[], "static_assert") { + if attr::contains_name(&item.attrs, "static_assert") { if m == ast::MutMutable { ccx.sess().span_fatal(expr.span, "cannot have static_assert on a mutable \ @@ -2746,7 +2746,7 @@ pub fn get_item_val(ccx: &CrateContext, id: ast::NodeId) -> ValueRef { let val = match item { ast_map::NodeItem(i) => { let ty = ty::node_id_to_type(ccx.tcx(), i.id); - let sym = || exported_name(ccx, id, ty, &i.attrs[]); + let sym = || exported_name(ccx, id, ty, &i.attrs); let v = match i.node { ast::ItemStatic(_, _, ref expr) => { @@ -2773,13 +2773,13 @@ pub fn get_item_val(ccx: &CrateContext, id: ast::NodeId) -> ValueRef { if contains_null(&sym[..]) { ccx.sess().fatal( &format!("Illegal null byte in export_name \ - value: `{}`", sym)[]); + value: `{}`", sym)); } let buf = CString::new(sym.clone()).unwrap(); let g = llvm::LLVMAddGlobal(ccx.llmod(), llty, buf.as_ptr()); - if attr::contains_name(&i.attrs[], + if attr::contains_name(&i.attrs, "thread_local") { llvm::set_thread_local(g, true); } @@ -2798,19 +2798,19 @@ pub fn get_item_val(ccx: &CrateContext, id: ast::NodeId) -> ValueRef { sym, i.id) }; - set_llvm_fn_attrs(ccx, &i.attrs[], llfn); + set_llvm_fn_attrs(ccx, &i.attrs, llfn); llfn } _ => panic!("get_item_val: weird result in table") }; - match attr::first_attr_value_str_by_name(&i.attrs[], + match attr::first_attr_value_str_by_name(&i.attrs, "link_section") { Some(sect) => { if contains_null(§) { ccx.sess().fatal(&format!("Illegal null byte in link_section value: `{}`", - §)[]); + §)); } unsafe { let buf = CString::new(sect.as_bytes()).unwrap(); @@ -2876,7 +2876,7 @@ pub fn get_item_val(ccx: &CrateContext, id: ast::NodeId) -> ValueRef { let sym = 
exported_name(ccx, id, ty, - &enm.attrs[]); + &enm.attrs); llfn = match enm.node { ast::ItemEnum(_, _) => { @@ -2903,7 +2903,7 @@ pub fn get_item_val(ccx: &CrateContext, id: ast::NodeId) -> ValueRef { let sym = exported_name(ccx, id, ty, - &struct_item.attrs[]); + &struct_item.attrs); let llfn = register_fn(ccx, struct_item.span, sym, ctor_id, ty); set_inline_hint(llfn); @@ -2912,7 +2912,7 @@ pub fn get_item_val(ccx: &CrateContext, id: ast::NodeId) -> ValueRef { ref variant => { ccx.sess().bug(&format!("get_item_val(): unexpected variant: {:?}", - variant)[]) + variant)) } }; @@ -2933,10 +2933,10 @@ fn register_method(ccx: &CrateContext, id: ast::NodeId, m: &ast::Method) -> ValueRef { let mty = ty::node_id_to_type(ccx.tcx(), id); - let sym = exported_name(ccx, id, mty, &m.attrs[]); + let sym = exported_name(ccx, id, mty, &m.attrs); let llfn = register_fn(ccx, m.span, sym, id, mty); - set_llvm_fn_attrs(ccx, &m.attrs[], llfn); + set_llvm_fn_attrs(ccx, &m.attrs, llfn); llfn } @@ -3104,7 +3104,7 @@ pub fn trans_crate<'tcx>(analysis: ty::CrateAnalysis<'tcx>) let link_meta = link::build_link_meta(&tcx.sess, krate, name); let codegen_units = tcx.sess.opts.cg.codegen_units; - let shared_ccx = SharedCrateContext::new(&link_meta.crate_name[], + let shared_ccx = SharedCrateContext::new(&link_meta.crate_name, codegen_units, tcx, export_map, @@ -3206,7 +3206,7 @@ pub fn trans_crate<'tcx>(analysis: ty::CrateAnalysis<'tcx>) llmod: shared_ccx.metadata_llmod(), }; let formats = shared_ccx.tcx().dependency_formats.borrow().clone(); - let no_builtins = attr::contains_name(&krate.attrs[], "no_builtins"); + let no_builtins = attr::contains_name(&krate.attrs, "no_builtins"); let translation = CrateTranslation { modules: modules, diff --git a/src/librustc_trans/trans/cabi.rs b/src/librustc_trans/trans/cabi.rs index 7abcdd07cc5da..0ff5264c00f0f 100644 --- a/src/librustc_trans/trans/cabi.rs +++ b/src/librustc_trans/trans/cabi.rs @@ -109,7 +109,7 @@ pub fn compute_abi_info(ccx: &CrateContext, atys: &[Type], rty: Type, ret_def: bool) -> FnType { - match &ccx.sess().target.target.arch[] { + match &ccx.sess().target.target.arch[..] 
{ "x86" => cabi_x86::compute_abi_info(ccx, atys, rty, ret_def), "x86_64" => if ccx.sess().target.target.options.is_like_windows { cabi_x86_win64::compute_abi_info(ccx, atys, rty, ret_def) @@ -128,6 +128,6 @@ pub fn compute_abi_info(ccx: &CrateContext, "mips" => cabi_mips::compute_abi_info(ccx, atys, rty, ret_def), "powerpc" => cabi_powerpc::compute_abi_info(ccx, atys, rty, ret_def), a => ccx.sess().fatal(&format!("unrecognized arch \"{}\" in target specification", a) - []), + ), } } diff --git a/src/librustc_trans/trans/callee.rs b/src/librustc_trans/trans/callee.rs index 3d3e35cd776f0..1cc8f62045df8 100644 --- a/src/librustc_trans/trans/callee.rs +++ b/src/librustc_trans/trans/callee.rs @@ -118,7 +118,7 @@ fn trans<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, expr: &ast::Expr) expr.span, &format!("type of callee is neither bare-fn nor closure: \ {}", - bcx.ty_to_string(datum.ty))[]); + bcx.ty_to_string(datum.ty))); } } } @@ -215,7 +215,7 @@ fn trans<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, expr: &ast::Expr) bcx.tcx().sess.span_bug( ref_expr.span, &format!("cannot translate def {:?} \ - to a callable thing!", def)[]); + to a callable thing!", def)); } } } @@ -298,7 +298,7 @@ pub fn trans_fn_pointer_shim<'a, 'tcx>( _ => { tcx.sess.bug(&format!("trans_fn_pointer_shim invoked on invalid type: {}", - bare_fn_ty.repr(tcx))[]); + bare_fn_ty.repr(tcx))); } }; let sig = ty::erase_late_bound_regions(tcx, sig); diff --git a/src/librustc_trans/trans/cleanup.rs b/src/librustc_trans/trans/cleanup.rs index 85e53618f6da9..a3705a67cdc5a 100644 --- a/src/librustc_trans/trans/cleanup.rs +++ b/src/librustc_trans/trans/cleanup.rs @@ -513,7 +513,7 @@ impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> { self.ccx.sess().bug( &format!("no cleanup scope {} found", - self.ccx.tcx().map.node_to_string(cleanup_scope))[]); + self.ccx.tcx().map.node_to_string(cleanup_scope))); } /// Schedules a cleanup to occur in the top-most scope, which must be a temporary scope. @@ -695,7 +695,7 @@ impl<'blk, 'tcx> CleanupHelperMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx LoopExit(id, _) => { self.ccx.sess().bug(&format!( "cannot exit from scope {}, \ - not in scope", id)[]); + not in scope", id)); } } } @@ -1135,7 +1135,7 @@ pub fn temporary_scope(tcx: &ty::ctxt, } None => { tcx.sess.bug(&format!("no temporary scope available for expr {}", - id)[]) + id)) } } } diff --git a/src/librustc_trans/trans/common.rs b/src/librustc_trans/trans/common.rs index a9cda94bebac5..60725bf9b2a29 100644 --- a/src/librustc_trans/trans/common.rs +++ b/src/librustc_trans/trans/common.rs @@ -278,7 +278,7 @@ pub fn gensym_name(name: &str) -> PathElem { let num = token::gensym(name).usize(); // use one colon which will get translated to a period by the mangler, and // we're guaranteed that `num` is globally unique for this crate. 
- PathName(token::gensym(&format!("{}:{}", name, num)[])) + PathName(token::gensym(&format!("{}:{}", name, num))) } #[derive(Copy)] @@ -606,7 +606,7 @@ impl<'blk, 'tcx> BlockS<'blk, 'tcx> { Some(v) => v.clone(), None => { self.tcx().sess.bug(&format!( - "no def associated with node id {}", nid)[]); + "no def associated with node id {}", nid)); } } } @@ -1011,7 +1011,7 @@ pub fn fulfill_obligation<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, span, &format!("Encountered error `{}` selecting `{}` during trans", e.repr(tcx), - trait_ref.repr(tcx))[]) + trait_ref.repr(tcx))) } }; @@ -1104,7 +1104,7 @@ pub fn drain_fulfillment_cx<'a,'tcx,T>(span: Span, infcx.tcx.sess.span_bug( span, &format!("Encountered errors `{}` fulfilling during trans", - errors.repr(infcx.tcx))[]); + errors.repr(infcx.tcx))); } } } @@ -1144,7 +1144,7 @@ pub fn node_id_substs<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, if substs.types.any(|t| ty::type_needs_infer(*t)) { tcx.sess.bug(&format!("type parameters for node {:?} include inference types: {:?}", - node, substs.repr(tcx))[]); + node, substs.repr(tcx))); } monomorphize::apply_param_substs(tcx, diff --git a/src/librustc_trans/trans/consts.rs b/src/librustc_trans/trans/consts.rs index 7705b53ee38c6..3c0024712b23a 100644 --- a/src/librustc_trans/trans/consts.rs +++ b/src/librustc_trans/trans/consts.rs @@ -54,7 +54,7 @@ pub fn const_lit(cx: &CrateContext, e: &ast::Expr, lit: &ast::Lit) _ => cx.sess().span_bug(lit.span, &format!("integer literal has type {} (expected int \ or uint)", - ty_to_string(cx.tcx(), lit_int_ty))[]) + ty_to_string(cx.tcx(), lit_int_ty))) } } ast::LitFloat(ref fs, t) => { @@ -152,7 +152,7 @@ fn const_deref<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } None => { cx.sess().bug(&format!("unexpected dereferenceable type {}", - ty_to_string(cx.tcx(), ty))[]) + ty_to_string(cx.tcx(), ty))) } } } @@ -174,7 +174,7 @@ pub fn get_const_expr<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, } else { ccx.sess().span_bug(ref_expr.span, &format!("get_const_val given non-constant item {}", - item.repr(ccx.tcx()))[]); + item.repr(ccx.tcx()))); } } @@ -301,7 +301,7 @@ pub fn const_expr<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } None => { cx.sess().bug(&format!("unexpected dereferenceable type {}", - ty_to_string(cx.tcx(), ty))[]) + ty_to_string(cx.tcx(), ty))) } } } @@ -309,7 +309,7 @@ pub fn const_expr<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } Some(autoref) => { cx.sess().span_bug(e.span, - &format!("unimplemented const first autoref {:?}", autoref)[]) + &format!("unimplemented const first autoref {:?}", autoref)) } }; match second_autoref { @@ -333,7 +333,7 @@ pub fn const_expr<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } Some(autoref) => { cx.sess().span_bug(e.span, - &format!("unimplemented const second autoref {:?}", autoref)[]) + &format!("unimplemented const second autoref {:?}", autoref)) } } } @@ -351,7 +351,7 @@ pub fn const_expr<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } cx.sess().bug(&format!("const {} of type {} has size {} instead of {}", e.repr(cx.tcx()), ty_to_string(cx.tcx(), ety_adjusted), - csize, tsize)[]); + csize, tsize)); } (llconst, ety_adjusted) } @@ -485,7 +485,7 @@ fn const_expr_unadjusted<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, _ => cx.sess().span_bug(base.span, &format!("index-expr base must be a vector \ or string type, found {}", - ty_to_string(cx.tcx(), bt))[]) + ty_to_string(cx.tcx(), bt))) }, ty::ty_rptr(_, mt) => match mt.ty.sty { ty::ty_vec(_, Some(u)) => { @@ -494,12 +494,12 @@ fn const_expr_unadjusted<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, _ => 
cx.sess().span_bug(base.span, &format!("index-expr base must be a vector \ or string type, found {}", - ty_to_string(cx.tcx(), bt))[]) + ty_to_string(cx.tcx(), bt))) }, _ => cx.sess().span_bug(base.span, &format!("index-expr base must be a vector \ or string type, found {}", - ty_to_string(cx.tcx(), bt))[]) + ty_to_string(cx.tcx(), bt))) }; let len = llvm::LLVMConstIntGetZExtValue(len) as u64; diff --git a/src/librustc_trans/trans/context.rs b/src/librustc_trans/trans/context.rs index eb07bdb7ba11b..3586a9dda2067 100644 --- a/src/librustc_trans/trans/context.rs +++ b/src/librustc_trans/trans/context.rs @@ -378,7 +378,7 @@ impl<'tcx> LocalCrateContext<'tcx> { .target .target .data_layout - []); + ); let dbg_cx = if shared.tcx.sess.opts.debuginfo != NoDebugInfo { Some(debuginfo::CrateDebugContext::new(llmod)) @@ -731,7 +731,7 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { /// currently conservatively bounded to 1 << 47 as that is enough to cover the current usable /// address space on 64-bit ARMv8 and x86_64. pub fn obj_size_bound(&self) -> u64 { - match &self.sess().target.target.target_pointer_width[] { + match &self.sess().target.target.target_pointer_width[..] { "32" => 1 << 31, "64" => 1 << 47, _ => unreachable!() // error handled by config::build_target_config @@ -741,7 +741,7 @@ impl<'b, 'tcx> CrateContext<'b, 'tcx> { pub fn report_overbig_object(&self, obj: Ty<'tcx>) -> ! { self.sess().fatal( &format!("the type `{}` is too big for the current architecture", - obj.repr(self.tcx()))[]) + obj.repr(self.tcx()))) } } diff --git a/src/librustc_trans/trans/controlflow.rs b/src/librustc_trans/trans/controlflow.rs index 26e12a1af403d..6860cda8241c9 100644 --- a/src/librustc_trans/trans/controlflow.rs +++ b/src/librustc_trans/trans/controlflow.rs @@ -12,6 +12,7 @@ use llvm::ValueRef; use middle::def; use middle::lang_items::{PanicFnLangItem, PanicBoundsCheckFnLangItem}; use trans::base::*; +use trans::basic_block::BasicBlock; use trans::build::*; use trans::callee; use trans::cleanup::CleanupMethods; @@ -40,7 +41,7 @@ pub fn trans_stmt<'blk, 'tcx>(cx: Block<'blk, 'tcx>, debug!("trans_stmt({})", s.repr(cx.tcx())); if cx.sess().asm_comments() { - add_span_comment(cx, s.span, &s.repr(cx.tcx())[]); + add_span_comment(cx, s.span, &s.repr(cx.tcx())); } let mut bcx = cx; @@ -280,6 +281,12 @@ pub fn trans_loop<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, fcx.pop_loop_cleanup_scope(loop_expr.id); + // If there are no predecessors for the next block, we just translated an endless loop and the + // next block is unreachable + if BasicBlock(next_bcx_in.llbb).pred_iter().next().is_none() { + Unreachable(next_bcx_in); + } + return next_bcx_in; } @@ -303,7 +310,7 @@ pub fn trans_break_cont<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, Some(&def::DefLabel(loop_id)) => loop_id, ref r => { bcx.tcx().sess.bug(&format!("{:?} in def-map for label", - r)[]) + r)) } } } @@ -368,7 +375,7 @@ pub fn trans_fail<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, let v_str = C_str_slice(ccx, fail_str); let loc = bcx.sess().codemap().lookup_char_pos(call_info.span.lo); - let filename = token::intern_and_get_ident(&loc.file.name[]); + let filename = token::intern_and_get_ident(&loc.file.name); let filename = C_str_slice(ccx, filename); let line = C_uint(ccx, loc.line); let expr_file_line_const = C_struct(ccx, &[v_str, filename, line], false); @@ -395,7 +402,7 @@ pub fn trans_fail_bounds_check<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, // Extract the file/line from the span let loc = bcx.sess().codemap().lookup_char_pos(call_info.span.lo); - let filename = 
token::intern_and_get_ident(&loc.file.name[]); + let filename = token::intern_and_get_ident(&loc.file.name); // Invoke the lang item let filename = C_str_slice(ccx, filename); diff --git a/src/librustc_trans/trans/datum.rs b/src/librustc_trans/trans/datum.rs index f4a5ba98b6767..96211832c1cc9 100644 --- a/src/librustc_trans/trans/datum.rs +++ b/src/librustc_trans/trans/datum.rs @@ -557,7 +557,7 @@ impl<'tcx> Datum<'tcx, Lvalue> { } _ => bcx.tcx().sess.bug( &format!("Unexpected unsized type in get_element: {}", - bcx.ty_to_string(self.ty))[]) + bcx.ty_to_string(self.ty))) }; Datum { val: val, diff --git a/src/librustc_trans/trans/debuginfo.rs b/src/librustc_trans/trans/debuginfo.rs index fc0129239aac7..b5cba9b67406c 100644 --- a/src/librustc_trans/trans/debuginfo.rs +++ b/src/librustc_trans/trans/debuginfo.rs @@ -286,7 +286,7 @@ impl<'tcx> TypeMap<'tcx> { metadata: DIType) { if self.type_to_metadata.insert(type_, metadata).is_some() { cx.sess().bug(&format!("Type metadata for Ty '{}' is already in the TypeMap!", - ppaux::ty_to_string(cx.tcx(), type_))[]); + ppaux::ty_to_string(cx.tcx(), type_))); } } @@ -299,7 +299,7 @@ impl<'tcx> TypeMap<'tcx> { if self.unique_id_to_metadata.insert(unique_type_id, metadata).is_some() { let unique_type_id_str = self.get_unique_type_id_as_string(unique_type_id); cx.sess().bug(&format!("Type metadata for unique id '{}' is already in the TypeMap!", - &unique_type_id_str[..])[]); + &unique_type_id_str[..])); } } @@ -412,7 +412,7 @@ impl<'tcx> TypeMap<'tcx> { ty::ty_vec(inner_type, optional_length) => { match optional_length { Some(len) => { - unique_type_id.push_str(&format!("[{}]", len)[]); + unique_type_id.push_str(&format!("[{}]", len)); } None => { unique_type_id.push_str("[]"); @@ -481,8 +481,8 @@ impl<'tcx> TypeMap<'tcx> { }, _ => { cx.sess().bug(&format!("get_unique_type_id_of_type() - unexpected type: {}, {:?}", - &ppaux::ty_to_string(cx.tcx(), type_)[], - type_.sty)[]) + &ppaux::ty_to_string(cx.tcx(), type_), + type_.sty)) } }; @@ -525,7 +525,7 @@ impl<'tcx> TypeMap<'tcx> { output.push_str(crate_hash.as_str()); output.push_str("/"); - output.push_str(&format!("{:x}", def_id.node)[]); + output.push_str(&format!("{:x}", def_id.node)); // Maybe check that there is no self type here. 
@@ -600,7 +600,7 @@ impl<'tcx> TypeMap<'tcx> { -> UniqueTypeId { let enum_type_id = self.get_unique_type_id_of_type(cx, enum_type); let enum_variant_type_id = format!("{}::{}", - &self.get_unique_type_id_as_string(enum_type_id)[], + &self.get_unique_type_id_as_string(enum_type_id), variant_name); let interner_key = self.unique_id_interner.intern(Rc::new(enum_variant_type_id)); UniqueTypeId(interner_key) @@ -783,19 +783,19 @@ pub fn create_global_var_metadata(cx: &CrateContext, create_global_var_metadata() - Captured var-id refers to \ unexpected ast_item variant: {:?}", - var_item)[]) + var_item)) } } }, _ => cx.sess().bug(&format!("debuginfo::create_global_var_metadata() \ - Captured var-id refers to unexpected \ ast_map variant: {:?}", - var_item)[]) + var_item)) }; let (file_metadata, line_number) = if span != codemap::DUMMY_SP { let loc = span_start(cx, span); - (file_metadata(cx, &loc.file.name[]), loc.line as c_uint) + (file_metadata(cx, &loc.file.name), loc.line as c_uint) } else { (UNKNOWN_FILE_METADATA, UNKNOWN_LINE_NUMBER) }; @@ -847,7 +847,7 @@ pub fn create_local_var_metadata(bcx: Block, local: &ast::Local) { None => { bcx.sess().span_bug(span, &format!("no entry in lllocals table for {}", - node_id)[]); + node_id)); } }; @@ -903,7 +903,7 @@ pub fn create_captured_var_metadata<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, "debuginfo::create_captured_var_metadata() - \ Captured var-id refers to unexpected \ ast_map variant: {:?}", - ast_item)[]); + ast_item)); } } } @@ -913,7 +913,7 @@ pub fn create_captured_var_metadata<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, &format!("debuginfo::create_captured_var_metadata() - \ Captured var-id refers to unexpected \ ast_map variant: {:?}", - ast_item)[]); + ast_item)); } }; @@ -1025,7 +1025,7 @@ pub fn create_argument_metadata(bcx: Block, arg: &ast::Arg) { None => { bcx.sess().span_bug(span, &format!("no entry in lllocals table for {}", - node_id)[]); + node_id)); } }; @@ -1319,7 +1319,7 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, cx.sess() .bug(&format!("create_function_debug_context: \ unexpected sort of node: {:?}", - fnitem)[]) + fnitem)) } } } @@ -1330,7 +1330,7 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } _ => cx.sess().bug(&format!("create_function_debug_context: \ unexpected sort of node: {:?}", - fnitem)[]) + fnitem)) }; // This can be the case for functions inlined from another crate @@ -1339,7 +1339,7 @@ pub fn create_function_debug_context<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } let loc = span_start(cx, span); - let file_metadata = file_metadata(cx, &loc.file.name[]); + let file_metadata = file_metadata(cx, &loc.file.name); let function_type_metadata = unsafe { let fn_signature = get_function_signature(cx, @@ -1751,7 +1751,7 @@ fn scope_metadata(fcx: &FunctionContext, fcx.ccx.sess().span_bug(error_reporting_span, &format!("debuginfo: Could not find scope info for node {:?}", - node)[]); + node)); } } } @@ -1947,7 +1947,7 @@ impl<'tcx> RecursiveTypeDescription<'tcx> { cx.sess().bug(&format!("Forward declaration of potentially recursive type \ '{}' was not found in TypeMap!", ppaux::ty_to_string(cx.tcx(), unfinished_type)) - []); + ); } } @@ -2370,7 +2370,7 @@ fn describe_enum_variant<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, .iter() .map(|&t| type_of::type_of(cx, t)) .collect::>() - [], + , struct_def.packed); // Could do some consistency checks here: size, align, field count, discr type @@ -2437,7 +2437,7 @@ fn prepare_enum_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let 
(containing_scope, definition_span) = get_namespace_and_span_for_item(cx, enum_def_id); let loc = span_start(cx, definition_span); - let file_metadata = file_metadata(cx, &loc.file.name[]); + let file_metadata = file_metadata(cx, &loc.file.name); let variants = ty::enum_variants(cx.tcx(), enum_def_id); @@ -2624,7 +2624,7 @@ fn set_members_of_composite_type(cx: &CrateContext, Please use a rustc built with anewer \ version of LLVM.", llvm_version_major, - llvm_version_minor)[]); + llvm_version_minor)); } else { cx.sess().bug("debuginfo::set_members_of_composite_type() - \ Already completed forward declaration re-encountered."); @@ -2786,7 +2786,7 @@ fn vec_slice_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, assert!(member_descriptions.len() == member_llvm_types.len()); let loc = span_start(cx, span); - let file_metadata = file_metadata(cx, &loc.file.name[]); + let file_metadata = file_metadata(cx, &loc.file.name); let metadata = composite_type_metadata(cx, slice_llvm_type, @@ -2865,7 +2865,7 @@ fn trait_pointer_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, let pp_type_name = ppaux::ty_to_string(cx.tcx(), trait_type); cx.sess().bug(&format!("debuginfo: Unexpected trait-object type in \ trait_pointer_metadata(): {}", - &pp_type_name[..])[]); + &pp_type_name[..])); } }; @@ -3005,7 +3005,7 @@ fn type_metadata<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, } _ => { cx.sess().bug(&format!("debuginfo: unexpected type in type_metadata: {:?}", - sty)[]) + sty)) } }; @@ -3248,7 +3248,7 @@ fn create_scope_map(cx: &CrateContext, { // Create a new lexical scope and push it onto the stack let loc = cx.sess().codemap().lookup_char_pos(scope_span.lo); - let file_metadata = file_metadata(cx, &loc.file.name[]); + let file_metadata = file_metadata(cx, &loc.file.name); let parent_scope = scope_stack.last().unwrap().scope_metadata; let scope_metadata = unsafe { @@ -3370,7 +3370,7 @@ fn create_scope_map(cx: &CrateContext, if need_new_scope { // Create a new lexical scope and push it onto the stack let loc = cx.sess().codemap().lookup_char_pos(pat.span.lo); - let file_metadata = file_metadata(cx, &loc.file.name[]); + let file_metadata = file_metadata(cx, &loc.file.name); let parent_scope = scope_stack.last().unwrap().scope_metadata; let scope_metadata = unsafe { @@ -3832,7 +3832,7 @@ fn push_debuginfo_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, ty::ty_projection(..) 
| ty::ty_param(_) => { cx.sess().bug(&format!("debuginfo: Trying to create type name for \ - unexpected type: {}", ppaux::ty_to_string(cx.tcx(), t))[]); + unexpected type: {}", ppaux::ty_to_string(cx.tcx(), t))); } } @@ -3915,13 +3915,13 @@ impl NamespaceTreeNode { None => {} } let string = token::get_name(node.name); - output.push_str(&format!("{}", string.len())[]); + output.push_str(&format!("{}", string.len())); output.push_str(&string); } let mut name = String::from_str("_ZN"); fill_nested(self, &mut name); - name.push_str(&format!("{}", item_name.len())[]); + name.push_str(&format!("{}", item_name.len())); name.push_str(item_name); name.push('E'); name @@ -3929,7 +3929,7 @@ impl NamespaceTreeNode { } fn crate_root_namespace<'a>(cx: &'a CrateContext) -> &'a str { - &cx.link_meta().crate_name[] + &cx.link_meta().crate_name } fn namespace_for_item(cx: &CrateContext, def_id: ast::DefId) -> Rc { @@ -4005,7 +4005,7 @@ fn namespace_for_item(cx: &CrateContext, def_id: ast::DefId) -> Rc { cx.sess().bug(&format!("debuginfo::namespace_for_item(): \ path too short for {:?}", - def_id)[]); + def_id)); } } }) diff --git a/src/librustc_trans/trans/expr.rs b/src/librustc_trans/trans/expr.rs index 1af9fa87c6b7d..78992959a6f22 100644 --- a/src/librustc_trans/trans/expr.rs +++ b/src/librustc_trans/trans/expr.rs @@ -308,7 +308,7 @@ pub fn unsized_info<'a, 'tcx, F>(ccx: &CrateContext<'a, 'tcx>, unsized_info(ccx, k, id, ty_substs[tp_index], param_substs, identity) } _ => ccx.sess().bug(&format!("UnsizeStruct with bad sty: {}", - unadjusted_ty.repr(ccx.tcx()))[]) + unadjusted_ty.repr(ccx.tcx()))) }, &ty::UnsizeVtable(ty::TyTrait { ref principal, .. }, _) => { // Note that we preserve binding levels here: @@ -524,7 +524,7 @@ fn apply_adjustments<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, let unboxed_ty = match datum_ty.sty { ty::ty_uniq(t) => t, _ => bcx.sess().bug(&format!("Expected ty_uniq, found {}", - bcx.ty_to_string(datum_ty))[]) + bcx.ty_to_string(datum_ty))) }; let result_ty = ty::mk_uniq(tcx, ty::unsize_ty(tcx, unboxed_ty, k, expr.span)); @@ -696,7 +696,7 @@ fn trans_datum_unadjusted<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, expr.span, &format!("trans_rvalue_datum_unadjusted reached \ fall-through case: {:?}", - expr.node)[]); + expr.node)); } } } @@ -1020,7 +1020,7 @@ fn trans_rvalue_stmt_unadjusted<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, expr.span, &format!("trans_rvalue_stmt_unadjusted reached \ fall-through case: {:?}", - expr.node)[]); + expr.node)); } } } @@ -1216,7 +1216,7 @@ fn trans_rvalue_dps_unadjusted<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, expr.span, &format!("trans_rvalue_dps_unadjusted reached fall-through \ case: {:?}", - expr.node)[]); + expr.node)); } } } @@ -1266,7 +1266,7 @@ fn trans_def_dps_unadjusted<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, _ => { bcx.tcx().sess.span_bug(ref_expr.span, &format!( "Non-DPS def {:?} referened by {}", - def, bcx.node_id_to_string(ref_expr.id))[]); + def, bcx.node_id_to_string(ref_expr.id))); } } } @@ -1295,7 +1295,7 @@ pub fn trans_def_fn_unadjusted<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ccx.tcx().sess.span_bug(ref_expr.span, &format!( "trans_def_fn_unadjusted invoked on: {:?} for {}", def, - ref_expr.repr(ccx.tcx()))[]); + ref_expr.repr(ccx.tcx()))); } } } @@ -1315,7 +1315,7 @@ pub fn trans_local_var<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, None => { bcx.sess().bug(&format!( "trans_local_var: no llval for upvar {} found", - nid)[]); + nid)); } } } @@ -1325,7 +1325,7 @@ pub fn trans_local_var<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, None => { bcx.sess().bug(&format!( 
"trans_local_var: no datum for local/arg {} found", - nid)[]); + nid)); } }; debug!("take_local(nid={}, v={}, ty={})", @@ -1335,7 +1335,7 @@ pub fn trans_local_var<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, _ => { bcx.sess().unimpl(&format!( "unsupported def type in trans_local_var: {:?}", - def)[]); + def)); } } } @@ -1358,7 +1358,7 @@ pub fn with_field_tys<'tcx, R, F>(tcx: &ty::ctxt<'tcx>, } ty::ty_tup(ref v) => { - op(0, &tup_fields(&v[..])[]) + op(0, &tup_fields(&v[..])) } ty::ty_enum(_, substs) => { @@ -1368,7 +1368,7 @@ pub fn with_field_tys<'tcx, R, F>(tcx: &ty::ctxt<'tcx>, tcx.sess.bug(&format!( "cannot get field types from the enum type {} \ without a node ID", - ty.repr(tcx))[]); + ty.repr(tcx))); } Some(node_id) => { let def = tcx.def_map.borrow()[node_id].clone(); @@ -1392,7 +1392,7 @@ pub fn with_field_tys<'tcx, R, F>(tcx: &ty::ctxt<'tcx>, _ => { tcx.sess.bug(&format!( "cannot get field types from the type {}", - ty.repr(tcx))[]); + ty.repr(tcx))); } } } @@ -2097,7 +2097,7 @@ fn trans_imm_cast<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, t_in.repr(bcx.tcx()), k_in, t_out.repr(bcx.tcx()), - k_out)[]) + k_out)) } } } @@ -2106,7 +2106,7 @@ fn trans_imm_cast<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, t_in.repr(bcx.tcx()), k_in, t_out.repr(bcx.tcx()), - k_out)[]) + k_out)) }; return immediate_rvalue_bcx(bcx, newval, t_out).to_expr_datumblock(); } @@ -2272,7 +2272,7 @@ fn deref_once<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, bcx.tcx().sess.span_bug( expr.span, &format!("deref invoked on expr of illegal type {}", - datum.ty.repr(bcx.tcx()))[]); + datum.ty.repr(bcx.tcx()))); } }; diff --git a/src/librustc_trans/trans/foreign.rs b/src/librustc_trans/trans/foreign.rs index 4508fe21a65fa..efae76c5ef41c 100644 --- a/src/librustc_trans/trans/foreign.rs +++ b/src/librustc_trans/trans/foreign.rs @@ -111,7 +111,7 @@ pub fn register_static(ccx: &CrateContext, let llty = type_of::type_of(ccx, ty); let ident = link_name(foreign_item); - match attr::first_attr_value_str_by_name(&foreign_item.attrs[], + match attr::first_attr_value_str_by_name(&foreign_item.attrs, "linkage") { // If this is a static with a linkage specified, then we need to handle // it a little specially. The typesystem prevents things like &T and @@ -240,11 +240,11 @@ pub fn trans_native_call<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, let fn_sig = ty::erase_late_bound_regions(ccx.tcx(), fn_sig); let llsig = foreign_signature(ccx, &fn_sig, &passed_arg_tys[..]); let fn_type = cabi::compute_abi_info(ccx, - &llsig.llarg_tys[], + &llsig.llarg_tys, llsig.llret_ty, llsig.ret_def); - let arg_tys: &[cabi::ArgType] = &fn_type.arg_tys[]; + let arg_tys: &[cabi::ArgType] = &fn_type.arg_tys; let mut llargs_foreign = Vec::new(); @@ -439,7 +439,7 @@ fn gate_simd_ffi(tcx: &ty::ctxt, decl: &ast::FnDecl, ty: &ty::BareFnTy) { tcx.sess.span_err(ast_ty.span, &format!("use of SIMD type `{}` in FFI is highly experimental and \ may result in invalid code", - pprust::ty_to_string(ast_ty))[]); + pprust::ty_to_string(ast_ty))); tcx.sess.span_help(ast_ty.span, "add #![feature(simd_ffi)] to the crate attributes to enable"); } @@ -603,7 +603,7 @@ pub fn trans_rust_fn_with_foreign_abi<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ccx.sess().bug(&format!("build_rust_fn: extern fn {} has ty {}, \ expected a bare fn ty", ccx.tcx().map.path_to_string(id), - t.repr(tcx))[]); + t.repr(tcx))); } }; @@ -868,9 +868,9 @@ pub fn trans_rust_fn_with_foreign_abi<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, // the massive simplifications that have occurred. 
pub fn link_name(i: &ast::ForeignItem) -> InternedString { - match attr::first_attr_value_str_by_name(&i.attrs[], "link_name") { + match attr::first_attr_value_str_by_name(&i.attrs, "link_name") { Some(ln) => ln.clone(), - None => match weak_lang_items::link_name(&i.attrs[]) { + None => match weak_lang_items::link_name(&i.attrs) { Some(name) => name, None => token::get_ident(i.ident), } @@ -913,7 +913,7 @@ fn foreign_types_for_fn_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, let fn_sig = ty::erase_late_bound_regions(ccx.tcx(), fn_sig); let llsig = foreign_signature(ccx, &fn_sig, &fn_sig.inputs); let fn_ty = cabi::compute_abi_info(ccx, - &llsig.llarg_tys[], + &llsig.llarg_tys, llsig.llret_ty, llsig.ret_def); debug!("foreign_types_for_fn_ty(\ @@ -922,7 +922,7 @@ fn foreign_types_for_fn_ty<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, fn_ty={} -> {}, \ ret_def={}", ty.repr(ccx.tcx()), - ccx.tn().types_to_str(&llsig.llarg_tys[]), + ccx.tn().types_to_str(&llsig.llarg_tys), ccx.tn().type_to_string(llsig.llret_ty), ccx.tn().types_to_str(&fn_ty.arg_tys.iter().map(|t| t.ty).collect::>()), ccx.tn().type_to_string(fn_ty.ret_ty.ty), diff --git a/src/librustc_trans/trans/glue.rs b/src/librustc_trans/trans/glue.rs index 268b65c6ceb30..20956bf795b85 100644 --- a/src/librustc_trans/trans/glue.rs +++ b/src/librustc_trans/trans/glue.rs @@ -243,7 +243,7 @@ fn trans_struct_drop<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, sig.inputs[0] } _ => bcx.sess().bug(&format!("Expected function type, found {}", - bcx.ty_to_string(fty))[]) + bcx.ty_to_string(fty))) }; let (struct_data, info) = if type_is_sized(bcx.tcx(), t) { @@ -370,7 +370,7 @@ fn size_and_align_of_dst<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, t: Ty<'tcx>, info: C_uint(bcx.ccx(), unit_align)) } _ => bcx.sess().bug(&format!("Unexpected unsized type, found {}", - bcx.ty_to_string(t))[]) + bcx.ty_to_string(t))) } } @@ -443,7 +443,7 @@ fn make_drop_glue<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, v0: ValueRef, t: Ty<'tcx>) bcx.sess().warn(&format!("Ignoring drop flag in destructor for {}\ because the struct is unsized. See issue\ #16758", - bcx.ty_to_string(t))[]); + bcx.ty_to_string(t))); trans_struct_drop(bcx, t, v0, dtor, did, substs) } } @@ -521,7 +521,7 @@ pub fn declare_tydesc<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) note_unique_llvm_symbol(ccx, name); let ty_name = token::intern_and_get_ident( - &ppaux::ty_to_string(ccx.tcx(), t)[]); + &ppaux::ty_to_string(ccx.tcx(), t)); let ty_name = C_str_slice(ccx, ty_name); debug!("--- declare_tydesc {}", ppaux::ty_to_string(ccx.tcx(), t)); @@ -540,7 +540,7 @@ fn declare_generic_glue<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>, let fn_nm = mangle_internal_name_by_type_and_seq( ccx, t, - &format!("glue_{}", name)[]); + &format!("glue_{}", name)); let llfn = decl_cdecl_fn(ccx, &fn_nm[..], llfnty, ty::mk_nil(ccx.tcx())); note_unique_llvm_symbol(ccx, fn_nm.clone()); return (fn_nm, llfn); diff --git a/src/librustc_trans/trans/intrinsic.rs b/src/librustc_trans/trans/intrinsic.rs index a1b66ed94f06b..993c9eae45bf6 100644 --- a/src/librustc_trans/trans/intrinsic.rs +++ b/src/librustc_trans/trans/intrinsic.rs @@ -36,7 +36,7 @@ use syntax::parse::token; use util::ppaux::{Repr, ty_to_string}; pub fn get_simple_intrinsic(ccx: &CrateContext, item: &ast::ForeignItem) -> Option { - let name = match &token::get_ident(item.ident)[] { + let name = match &token::get_ident(item.ident)[..] 
{ "sqrtf32" => "llvm.sqrt.f32", "sqrtf64" => "llvm.sqrt.f64", "powif32" => "llvm.powi.f32", diff --git a/src/librustc_trans/trans/meth.rs b/src/librustc_trans/trans/meth.rs index 1d5d24a64036d..65d8f8ec3614d 100644 --- a/src/librustc_trans/trans/meth.rs +++ b/src/librustc_trans/trans/meth.rs @@ -79,7 +79,7 @@ pub fn trans_impl(ccx: &CrateContext, match *impl_item { ast::MethodImplItem(ref method) => { if method.pe_generics().ty_params.len() == 0 { - let trans_everywhere = attr::requests_inline(&method.attrs[]); + let trans_everywhere = attr::requests_inline(&method.attrs); for (ref ccx, is_origin) in ccx.maybe_iter(trans_everywhere) { let llfn = get_item_val(ccx, method.id); let empty_substs = tcx.mk_substs(Substs::trans_empty()); @@ -305,7 +305,7 @@ pub fn trans_static_method_callee<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, } _ => { tcx.sess.bug(&format!("static call to invalid vtable: {}", - vtbl.repr(tcx))[]); + vtbl.repr(tcx))); } } } @@ -393,7 +393,7 @@ fn trans_monomorphized_callee<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, traits::VtableParam(..) => { bcx.sess().bug( &format!("resolved vtable bad vtable {} in trans", - vtable.repr(bcx.tcx()))[]); + vtable.repr(bcx.tcx()))); } } } @@ -749,7 +749,7 @@ pub fn get_vtable<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, tcx.sess.bug( &format!("resolved vtable for {} to bad vtable {} in trans", trait_ref.repr(tcx), - vtable.repr(tcx))[]); + vtable.repr(tcx))); } } }); diff --git a/src/librustc_trans/trans/monomorphize.rs b/src/librustc_trans/trans/monomorphize.rs index ec48ab0d34a06..5ab1ec2a69eda 100644 --- a/src/librustc_trans/trans/monomorphize.rs +++ b/src/librustc_trans/trans/monomorphize.rs @@ -177,7 +177,7 @@ pub fn monomorphic_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, .. } => { let d = mk_lldecl(abi); - let needs_body = setup_lldecl(d, &i.attrs[]); + let needs_body = setup_lldecl(d, &i.attrs); if needs_body { if abi != abi::Rust { foreign::trans_rust_fn_with_foreign_abi( @@ -220,7 +220,7 @@ pub fn monomorphic_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, match *ii { ast::MethodImplItem(ref mth) => { let d = mk_lldecl(abi::Rust); - let needs_body = setup_lldecl(d, &mth.attrs[]); + let needs_body = setup_lldecl(d, &mth.attrs); if needs_body { trans_fn(ccx, mth.pe_fn_decl(), @@ -241,7 +241,7 @@ pub fn monomorphic_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, match *method { ast::ProvidedMethod(ref mth) => { let d = mk_lldecl(abi::Rust); - let needs_body = setup_lldecl(d, &mth.attrs[]); + let needs_body = setup_lldecl(d, &mth.attrs); if needs_body { trans_fn(ccx, mth.pe_fn_decl(), mth.pe_body(), d, psubsts, mth.id, &[]); @@ -250,7 +250,7 @@ pub fn monomorphic_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, } _ => { ccx.sess().bug(&format!("can't monomorphize a {:?}", - map_node)[]) + map_node)) } } } @@ -258,7 +258,7 @@ pub fn monomorphic_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, let d = mk_lldecl(abi::Rust); set_inline_hint(d); base::trans_tuple_struct(ccx, - &struct_def.fields[], + &struct_def.fields, struct_def.ctor_id.expect("ast-mapped tuple struct \ didn't have a ctor id"), psubsts, @@ -276,7 +276,7 @@ pub fn monomorphic_fn<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, ast_map::NodePat(..) | ast_map::NodeLocal(..) 
=> { ccx.sess().bug(&format!("can't monomorphize a {:?}", - map_node)[]) + map_node)) } }; diff --git a/src/librustc_trans/trans/type_.rs b/src/librustc_trans/trans/type_.rs index ad83135a0d46f..0c69a7132a77b 100644 --- a/src/librustc_trans/trans/type_.rs +++ b/src/librustc_trans/trans/type_.rs @@ -109,7 +109,7 @@ impl Type { } pub fn int(ccx: &CrateContext) -> Type { - match &ccx.tcx().sess.target.target.target_pointer_width[] { + match &ccx.tcx().sess.target.target.target_pointer_width[..] { "32" => Type::i32(ccx), "64" => Type::i64(ccx), tws => panic!("Unsupported target word size for int: {}", tws), diff --git a/src/librustc_trans/trans/type_of.rs b/src/librustc_trans/trans/type_of.rs index 489b56bbe6825..b53ea79804c08 100644 --- a/src/librustc_trans/trans/type_of.rs +++ b/src/librustc_trans/trans/type_of.rs @@ -185,7 +185,7 @@ pub fn sizing_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> Typ let llsizingty = match t.sty { _ if !lltype_is_sized(cx.tcx(), t) => { cx.sess().bug(&format!("trying to take the sizing type of {}, an unsized type", - ppaux::ty_to_string(cx.tcx(), t))[]) + ppaux::ty_to_string(cx.tcx(), t))) } ty::ty_bool => Type::bool(cx), @@ -238,7 +238,7 @@ pub fn sizing_type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> Typ ty::ty_projection(..) | ty::ty_infer(..) | ty::ty_param(..) | ty::ty_err(..) => { cx.sess().bug(&format!("fictitious type {} in sizing_type_of()", - ppaux::ty_to_string(cx.tcx(), t))[]) + ppaux::ty_to_string(cx.tcx(), t))) } ty::ty_vec(_, None) | ty::ty_trait(..) | ty::ty_str => panic!("unreachable") }; @@ -418,7 +418,7 @@ pub fn type_of<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> Type { } ty::ty_trait(..) => Type::opaque_trait(cx), _ => cx.sess().bug(&format!("ty_open with sized type: {}", - ppaux::ty_to_string(cx.tcx(), t))[]) + ppaux::ty_to_string(cx.tcx(), t))) }, ty::ty_infer(..) => cx.sess().bug("type_of with ty_infer"), diff --git a/src/librustc_typeck/astconv.rs b/src/librustc_typeck/astconv.rs index e3c1c66f78c9c..afdc414c163ce 100644 --- a/src/librustc_typeck/astconv.rs +++ b/src/librustc_typeck/astconv.rs @@ -195,7 +195,7 @@ pub fn opt_ast_region_to_region<'tcx>( help_name } else { format!("one of {}'s {} elided lifetimes", help_name, n) - })[]); + })[..]); if len == 2 && i == 0 { m.push_str(" or "); @@ -876,7 +876,7 @@ pub fn ast_ty_to_builtin_ty<'tcx>( .sess .span_bug(ast_ty.span, &format!("unbound path {}", - path.repr(this.tcx()))[]) + path.repr(this.tcx()))) } Some(&d) => d }; @@ -898,7 +898,7 @@ pub fn ast_ty_to_builtin_ty<'tcx>( this.tcx().sess.span_bug( path.span, &format!("converting `Box` to `{}`", - ty.repr(this.tcx()))[]); + ty.repr(this.tcx()))); } } } @@ -1206,7 +1206,7 @@ pub fn ast_ty_to_ty<'tcx>(this: &AstConv<'tcx>, tcx.sess .span_bug(ast_ty.span, &format!("unbound path {}", - path.repr(tcx))[]) + path.repr(tcx))) } Some(&d) => d }; @@ -1419,7 +1419,7 @@ fn ty_of_method_or_bare_fn<'a, 'tcx>(this: &AstConv<'tcx>, let input_params = if self_ty.is_some() { &decl.inputs[1..] } else { - &decl.inputs[] + &decl.inputs[..] 
}; let input_tys = input_params.iter().map(|a| ty_of_arg(this, &rb, a, None)); let input_pats: Vec = input_params.iter() diff --git a/src/librustc_typeck/check/implicator.rs b/src/librustc_typeck/check/implicator.rs index 4aaaf4ffe5ab3..998a4504088a3 100644 --- a/src/librustc_typeck/check/implicator.rs +++ b/src/librustc_typeck/check/implicator.rs @@ -165,7 +165,7 @@ impl<'a, 'tcx> Implicator<'a, 'tcx> { ty::ty_open(_) => { self.tcx().sess.bug( &format!("Unexpected type encountered while doing wf check: {}", - ty.repr(self.tcx()))[]); + ty.repr(self.tcx()))); } } } diff --git a/src/librustc_typeck/check/method/confirm.rs b/src/librustc_typeck/check/method/confirm.rs index dfbfc86c65970..53976df75d616 100644 --- a/src/librustc_typeck/check/method/confirm.rs +++ b/src/librustc_typeck/check/method/confirm.rs @@ -331,7 +331,7 @@ impl<'a,'tcx> ConfirmContext<'a,'tcx> { self.tcx().sess.span_bug( self.span, &format!("self-type `{}` for ObjectPick never dereferenced to an object", - self_ty.repr(self.tcx()))[]) + self_ty.repr(self.tcx()))) } } } @@ -386,7 +386,7 @@ impl<'a,'tcx> ConfirmContext<'a,'tcx> { &format!( "{} was a subtype of {} but now is not?", self_ty.repr(self.tcx()), - method_self_ty.repr(self.tcx()))[]); + method_self_ty.repr(self.tcx()))); } } } diff --git a/src/librustc_typeck/check/method/mod.rs b/src/librustc_typeck/check/method/mod.rs index ffbc8ad020ab7..ed86925bd57da 100644 --- a/src/librustc_typeck/check/method/mod.rs +++ b/src/librustc_typeck/check/method/mod.rs @@ -272,7 +272,7 @@ pub fn lookup_in_trait_adjusted<'a, 'tcx>(fcx: &FnCtxt<'a, 'tcx>, span, &format!( "trait method is &self but first arg is: {}", - transformed_self_ty.repr(fcx.tcx()))[]); + transformed_self_ty.repr(fcx.tcx()))); } } } @@ -282,7 +282,7 @@ pub fn lookup_in_trait_adjusted<'a, 'tcx>(fcx: &FnCtxt<'a, 'tcx>, span, &format!( "unexpected explicit self type in operator method: {:?}", - method_ty.explicit_self)[]); + method_ty.explicit_self)); } } } diff --git a/src/librustc_typeck/check/method/probe.rs b/src/librustc_typeck/check/method/probe.rs index 978fbbbcffc33..1cc4fe37fbddd 100644 --- a/src/librustc_typeck/check/method/probe.rs +++ b/src/librustc_typeck/check/method/probe.rs @@ -878,7 +878,7 @@ impl<'a,'tcx> ProbeContext<'a,'tcx> { debug!("pick_method(self_ty={})", self.infcx().ty_to_string(self_ty)); debug!("searching inherent candidates"); - match self.consider_candidates(self_ty, &self.inherent_candidates[]) { + match self.consider_candidates(self_ty, &self.inherent_candidates) { None => {} Some(pick) => { return Some(pick); @@ -886,7 +886,7 @@ impl<'a,'tcx> ProbeContext<'a,'tcx> { } debug!("searching extension candidates"); - self.consider_candidates(self_ty, &self.extension_candidates[]) + self.consider_candidates(self_ty, &self.extension_candidates) } fn consider_candidates(&self, diff --git a/src/librustc_typeck/check/method/suggest.rs b/src/librustc_typeck/check/method/suggest.rs index 1639772103b7a..f5a03f0721a65 100644 --- a/src/librustc_typeck/check/method/suggest.rs +++ b/src/librustc_typeck/check/method/suggest.rs @@ -71,7 +71,7 @@ pub fn report_error<'a, 'tcx>(fcx: &FnCtxt<'a, 'tcx>, if is_field { cx.sess.span_note(span, &format!("use `(s.{0})(...)` if you meant to call the \ - function stored in the `{0}` field", method_ustring)[]); + function stored in the `{0}` field", method_ustring)); } if static_sources.len() > 0 { diff --git a/src/librustc_typeck/check/mod.rs b/src/librustc_typeck/check/mod.rs index e443b4d0e606a..0430954ad7e2f 100644 --- a/src/librustc_typeck/check/mod.rs +++ 
b/src/librustc_typeck/check/mod.rs @@ -117,7 +117,7 @@ use std::iter::repeat; use std::slice; use syntax::{self, abi, attr}; use syntax::attr::AttrMetaMethods; -use syntax::ast::{self, ProvidedMethod, RequiredMethod, TypeTraitItem, DefId}; +use syntax::ast::{self, ProvidedMethod, RequiredMethod, TypeTraitItem, DefId, Visibility}; use syntax::ast_util::{self, local_def, PostExpansionMethod}; use syntax::codemap::{self, Span}; use syntax::owned_slice::OwnedSlice; @@ -615,7 +615,7 @@ fn check_fn<'a, 'tcx>(ccx: &'a CrateCtxt<'a, 'tcx>, let tcx = ccx.tcx; let err_count_on_creation = tcx.sess.err_count(); - let arg_tys = &fn_sig.inputs[]; + let arg_tys = &fn_sig.inputs; let ret_ty = fn_sig.output; debug!("check_fn(arg_tys={}, ret_ty={}, fn_id={})", @@ -713,7 +713,7 @@ pub fn check_item<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, it: &'tcx ast::Item) { ast::ItemEnum(ref enum_definition, _) => { check_enum_variants(ccx, it.span, - &enum_definition.variants[], + &enum_definition.variants, it.id); } ast::ItemFn(ref decl, _, _, _, ref body) => { @@ -1334,7 +1334,7 @@ impl<'a, 'tcx> FnCtxt<'a, 'tcx> { self.tcx().sess.span_bug( span, &format!("no type for local variable {}", - nid)[]); + nid)); } } } @@ -1707,7 +1707,7 @@ impl<'a, 'tcx> FnCtxt<'a, 'tcx> { Some(&t) => t, None => { self.tcx().sess.bug(&format!("no type for expr in fcx {}", - self.tag())[]); + self.tag())); } } } @@ -1739,7 +1739,7 @@ impl<'a, 'tcx> FnCtxt<'a, 'tcx> { self.tcx().sess.bug( &format!("no type for node {}: {} in fcx {}", id, self.tcx().map.node_to_string(id), - self.tag())[]); + self.tag())); } } } @@ -2275,7 +2275,7 @@ fn check_argument_types<'a, 'tcx>(fcx: &FnCtxt<'a, 'tcx>, if arg_types.len() == 1 {""} else {"s"}, args.len(), if args.len() == 1 {" was"} else {"s were"}); - expected_arg_tys = &[][]; + expected_arg_tys = &[]; err_args(fcx.tcx(), args.len()) } else { expected_arg_tys = match expected_arg_tys.get(0) { @@ -2292,7 +2292,7 @@ fn check_argument_types<'a, 'tcx>(fcx: &FnCtxt<'a, 'tcx>, span_err!(tcx.sess, sp, E0059, "cannot use call notation; the first type parameter \ for the function trait is neither a tuple nor unit"); - expected_arg_tys = &[][]; + expected_arg_tys = &[]; err_args(fcx.tcx(), args.len()) } } @@ -2309,7 +2309,7 @@ fn check_argument_types<'a, 'tcx>(fcx: &FnCtxt<'a, 'tcx>, if expected_arg_count == 1 {""} else {"s"}, supplied_arg_count, if supplied_arg_count == 1 {" was"} else {"s were"}); - expected_arg_tys = &[][]; + expected_arg_tys = &[]; err_args(fcx.tcx(), supplied_arg_count) } } else { @@ -2319,7 +2319,7 @@ fn check_argument_types<'a, 'tcx>(fcx: &FnCtxt<'a, 'tcx>, if expected_arg_count == 1 {""} else {"s"}, supplied_arg_count, if supplied_arg_count == 1 {" was"} else {"s were"}); - expected_arg_tys = &[][]; + expected_arg_tys = &[]; err_args(fcx.tcx(), supplied_arg_count) }; @@ -2809,7 +2809,7 @@ fn check_expr_with_unifier<'a, 'tcx, F>(fcx: &FnCtxt<'a, 'tcx>, }; let args = match rhs { Some(rhs) => slice::ref_slice(rhs), - None => &[][] + None => &[][..] 
}; match method { Some(method) => { @@ -3117,6 +3117,10 @@ fn check_expr_with_unifier<'a, 'tcx, F>(fcx: &FnCtxt<'a, 'tcx>, if skip.iter().any(|&x| x == n) { continue; } + // ignore private fields from non-local crates + if id.krate != ast::LOCAL_CRATE && elem.vis != Visibility::Public { + continue; + } let dist = lev_distance(n, name); if dist < best_dist { best = Some(n); @@ -4584,7 +4588,7 @@ pub fn check_enum_variants<'a,'tcx>(ccx: &CrateCtxt<'a,'tcx>, } let hint = *ty::lookup_repr_hints(ccx.tcx, ast::DefId { krate: ast::LOCAL_CRATE, node: id }) - [].get(0).unwrap_or(&attr::ReprAny); + .get(0).unwrap_or(&attr::ReprAny); if hint != attr::ReprAny && vs.len() <= 1 { if vs.len() == 1 { diff --git a/src/librustc_typeck/check/regionck.rs b/src/librustc_typeck/check/regionck.rs index 82abff8c425f4..f6ac1ddee4976 100644 --- a/src/librustc_typeck/check/regionck.rs +++ b/src/librustc_typeck/check/regionck.rs @@ -188,7 +188,7 @@ fn region_of_def(fcx: &FnCtxt, def: def::Def) -> ty::Region { } _ => { tcx.sess.bug(&format!("unexpected def in region_of_def: {:?}", - def)[]) + def)) } } } @@ -288,7 +288,7 @@ impl<'a, 'tcx> Rcx<'a, 'tcx> { Some(f) => f, None => { self.tcx().sess.bug( - &format!("No fn-sig entry for id={}", id)[]); + &format!("No fn-sig entry for id={}", id)); } }; @@ -1013,7 +1013,7 @@ fn constrain_autoderefs<'a, 'tcx>(rcx: &mut Rcx<'a, 'tcx>, rcx.tcx().sess.span_bug( deref_expr.span, &format!("bad overloaded deref type {}", - method.ty.repr(rcx.tcx()))[]) + method.ty.repr(rcx.tcx()))) } }; @@ -1417,7 +1417,7 @@ fn link_reborrowed_region<'a, 'tcx>(rcx: &Rcx<'a, 'tcx>, rcx.tcx().sess.span_bug( span, &format!("Illegal upvar id: {}", - upvar_id.repr(rcx.tcx()))[]); + upvar_id.repr(rcx.tcx()))); } } } @@ -1562,7 +1562,7 @@ fn generic_must_outlive<'a, 'tcx>(rcx: &Rcx<'a, 'tcx>, GenericKind::Param(..) 
=> { } GenericKind::Projection(ref projection_ty) => { param_bounds.push_all( - &projection_bounds(rcx, origin.span(), projection_ty)[]); + &projection_bounds(rcx, origin.span(), projection_ty)); } } diff --git a/src/librustc_typeck/check/wf.rs b/src/librustc_typeck/check/wf.rs index 2601c4d275291..b0ded25af1700 100644 --- a/src/librustc_typeck/check/wf.rs +++ b/src/librustc_typeck/check/wf.rs @@ -393,7 +393,7 @@ impl<'ccx, 'tcx> CheckTypeWellFormedVisitor<'ccx, 'tcx> { self.tcx().sess.span_err( span, &format!("parameter `{}` is never used", - param_name.user_string(self.tcx()))[]); + param_name.user_string(self.tcx()))); match suggested_marker_id { Some(def_id) => { diff --git a/src/librustc_typeck/coherence/mod.rs b/src/librustc_typeck/coherence/mod.rs index 1542e74ff8167..7215ab01b02de 100644 --- a/src/librustc_typeck/coherence/mod.rs +++ b/src/librustc_typeck/coherence/mod.rs @@ -86,7 +86,7 @@ fn get_base_type_def_id<'a, 'tcx>(inference_context: &InferCtxt<'a, 'tcx>, inference_context.tcx.sess.span_bug( span, &format!("coherence encountered unexpected type searching for base type: {}", - ty.repr(inference_context.tcx))[]); + ty.repr(inference_context.tcx))); } } } diff --git a/src/librustc_typeck/collect.rs b/src/librustc_typeck/collect.rs index 0b78af18e2617..bb5566ab131ec 100644 --- a/src/librustc_typeck/collect.rs +++ b/src/librustc_typeck/collect.rs @@ -899,7 +899,7 @@ fn get_trait_def<'a, 'tcx>(ccx: &CollectCtxt<'a, 'tcx>, ast_map::NodeItem(item) => trait_def_of_item(ccx, &*item), _ => { tcx.sess.bug(&format!("get_trait_def({}): not an item", - trait_id.node)[]) + trait_id.node)) } } } @@ -925,7 +925,7 @@ fn trait_def_of_item<'a, 'tcx>(ccx: &CollectCtxt<'a, 'tcx>, ref s => { tcx.sess.span_bug( it.span, - &format!("trait_def_of_item invoked on {:?}", s)[]); + &format!("trait_def_of_item invoked on {:?}", s)); } }; @@ -1025,7 +1025,7 @@ fn convert_trait_predicates<'a, 'tcx>(ccx: &CollectCtxt<'a, 'tcx>, it: &ast::Ite ref s => { tcx.sess.span_bug( it.span, - &format!("trait_def_of_item invoked on {:?}", s)[]); + &format!("trait_def_of_item invoked on {:?}", s)); } }; @@ -1284,8 +1284,8 @@ fn ty_generics_for_type_or_impl<'a, 'tcx>(ccx: &CollectCtxt<'a, 'tcx>, -> ty::Generics<'tcx> { ty_generics(ccx, subst::TypeSpace, - &generics.lifetimes[], - &generics.ty_params[], + &generics.lifetimes, + &generics.ty_params, &generics.where_clause, ty::Generics::empty()) } @@ -1314,8 +1314,8 @@ fn ty_generics_for_trait<'a, 'tcx>(ccx: &CollectCtxt<'a, 'tcx>, let mut generics = ty_generics(ccx, subst::TypeSpace, - &ast_generics.lifetimes[], - &ast_generics.ty_params[], + &ast_generics.lifetimes, + &ast_generics.ty_params, &ast_generics.where_clause, ty::Generics::empty()); @@ -1360,7 +1360,7 @@ fn ty_generics_for_fn_or_method<'a,'tcx>(ccx: &CollectCtxt<'a,'tcx>, ty_generics(ccx, subst::FnSpace, &early_lifetimes[..], - &generics.ty_params[], + &generics.ty_params, &generics.where_clause, base_generics) } @@ -1557,7 +1557,7 @@ fn get_or_create_type_parameter_def<'a,'tcx>(ccx: &CollectCtxt<'a,'tcx>, let param_ty = ty::ParamTy::new(space, index, param.ident.name); let bounds = compute_bounds(ccx, param_ty.to_ty(ccx.tcx), - ¶m.bounds[], + ¶m.bounds, SizedByDefault::Yes, param.span); let default = match param.default { @@ -1733,7 +1733,7 @@ fn check_bounds_compatible<'a,'tcx>(ccx: &CollectCtxt<'a,'tcx>, if !param_bounds.builtin_bounds.contains(&ty::BoundSized) { ty::each_bound_trait_and_supertraits( ccx.tcx, - ¶m_bounds.trait_bounds[], + ¶m_bounds.trait_bounds, |trait_ref| { let trait_def = 
ccx.get_trait_def(trait_ref.def_id()); if trait_def.bounds.builtin_bounds.contains(&ty::BoundSized) { diff --git a/src/librustc_typeck/lib.rs b/src/librustc_typeck/lib.rs index b5dca0bd4f6f4..78f13b37a8238 100644 --- a/src/librustc_typeck/lib.rs +++ b/src/librustc_typeck/lib.rs @@ -253,7 +253,7 @@ fn check_main_fn_ty(ccx: &CrateCtxt, &format!("main has a non-function type: found \ `{}`", ppaux::ty_to_string(tcx, - main_t))[]); + main_t))); } } } @@ -304,7 +304,7 @@ fn check_start_fn_ty(ccx: &CrateCtxt, tcx.sess.span_bug(start_span, &format!("start has a non-function type: found \ `{}`", - ppaux::ty_to_string(tcx, start_t))[]); + ppaux::ty_to_string(tcx, start_t))); } } } diff --git a/src/librustc_typeck/variance.rs b/src/librustc_typeck/variance.rs index 1adcf133bf3e0..cd4406b770dbf 100644 --- a/src/librustc_typeck/variance.rs +++ b/src/librustc_typeck/variance.rs @@ -595,7 +595,7 @@ impl<'a, 'tcx, 'v> Visitor<'v> for ConstraintContext<'a, 'tcx> { let trait_def = ty::lookup_trait_def(tcx, did); let predicates = ty::predicates(tcx, ty::mk_self_type(tcx), &trait_def.bounds); self.add_constraints_from_predicates(&trait_def.generics, - &predicates[], + &predicates, self.covariant); let trait_items = ty::trait_items(tcx, did); @@ -652,7 +652,7 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { None => { self.tcx().sess.bug(&format!( "no inferred index entry for {}", - self.tcx().map.node_to_string(param_id))[]); + self.tcx().map.node_to_string(param_id))); } } } @@ -941,7 +941,7 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { self.tcx().sess.bug( &format!("unexpected type encountered in \ variance inference: {}", - ty.repr(self.tcx()))[]); + ty.repr(self.tcx()))); } } } @@ -1071,7 +1071,7 @@ impl<'a, 'tcx> ConstraintContext<'a, 'tcx> { .sess .bug(&format!("unexpected region encountered in variance \ inference: {}", - region.repr(self.tcx()))[]); + region.repr(self.tcx()))); } } } diff --git a/src/librustdoc/clean/mod.rs b/src/librustdoc/clean/mod.rs index 7ef48378af183..19c34aff9a81e 100644 --- a/src/librustdoc/clean/mod.rs +++ b/src/librustdoc/clean/mod.rs @@ -2169,7 +2169,7 @@ impl Clean> for doctree::Import { // forcefully don't inline if this is not public or if the // #[doc(no_inline)] attribute is present. let denied = self.vis != ast::Public || self.attrs.iter().any(|a| { - &a.name()[] == "doc" && match a.meta_item_list() { + &a.name()[..] 
== "doc" && match a.meta_item_list() { Some(l) => attr::contains_name(l, "no_inline"), None => false, } diff --git a/src/librustdoc/html/format.rs b/src/librustdoc/html/format.rs index ed7f051408c45..11d9ecac14dd0 100644 --- a/src/librustdoc/html/format.rs +++ b/src/librustdoc/html/format.rs @@ -25,6 +25,7 @@ use clean; use stability_summary::ModuleSummary; use html::item_type::ItemType; use html::render; +use html::escape::Escape; use html::render::{cache, CURRENT_LOCATION_KEY}; /// Helper to render an optional visibility with a space after it (if the @@ -710,13 +711,14 @@ impl<'a> fmt::Display for Stability<'a> { let Stability(stab) = *self; match *stab { Some(ref stability) => { + let lvl = if stability.deprecated_since.is_empty() { + format!("{}", stability.level) + } else { + "Deprecated".to_string() + }; write!(f, "{lvl}", - lvl = if stability.deprecated_since.is_empty() { - format!("{}", stability.level) - } else { - "Deprecated".to_string() - }, - reason = stability.reason) + lvl = Escape(&*lvl), + reason = Escape(&*stability.reason)) } None => Ok(()) } @@ -728,14 +730,15 @@ impl<'a> fmt::Display for ConciseStability<'a> { let ConciseStability(stab) = *self; match *stab { Some(ref stability) => { + let lvl = if stability.deprecated_since.is_empty() { + format!("{}", stability.level) + } else { + "Deprecated".to_string() + }; write!(f, "", - lvl = if stability.deprecated_since.is_empty() { - format!("{}", stability.level) - } else { - "Deprecated".to_string() - }, + lvl = Escape(&*lvl), colon = if stability.reason.len() > 0 { ": " } else { "" }, - reason = stability.reason) + reason = Escape(&*stability.reason)) } None => { write!(f, "") diff --git a/src/librustdoc/html/highlight.rs b/src/librustdoc/html/highlight.rs index 44c0acda66fba..b88e5065b4f9c 100644 --- a/src/librustdoc/html/highlight.rs +++ b/src/librustdoc/html/highlight.rs @@ -142,7 +142,7 @@ fn doit(sess: &parse::ParseSess, mut lexer: lexer::StringReader, // keywords are also included in the identifier set token::Ident(ident, _is_mod_sep) => { - match &token::get_ident(ident)[] { + match &token::get_ident(ident)[..] { "ref" | "mut" => "kw-2", "self" => "self", diff --git a/src/librustdoc/html/static/main.css b/src/librustdoc/html/static/main.css index 2f0755ecb898a..21b7de9ff7c82 100644 --- a/src/librustdoc/html/static/main.css +++ b/src/librustdoc/html/static/main.css @@ -374,8 +374,14 @@ a { color: #000; background: transparent; } -.docblock a { color: #4e8bca; } -.docblock a:hover { text-decoration: underline; } + +.docblock a { + color: #4e8bca; +} + +.docblock a:hover { + text-decoration: underline; +} .content span.trait, .content a.trait, .block a.current.trait { color: #ed9603; } .content span.mod, .content a.mod, block a.current.mod { color: #4d76ae; } @@ -529,10 +535,19 @@ pre.rust { position: relative; } margin: 0 0 -5px; padding: 0; } + .section-header:hover a:after { content: '\2002\00a7\2002'; } +.section-header:hover a { + text-decoration: none; +} + +.section-header a { + color: inherit; +} + .collapse-toggle { font-weight: 300; position: absolute; diff --git a/src/librustdoc/visit_ast.rs b/src/librustdoc/visit_ast.rs index c52b0bab1fa8b..9181682d176d6 100644 --- a/src/librustdoc/visit_ast.rs +++ b/src/librustdoc/visit_ast.rs @@ -253,7 +253,7 @@ impl<'a, 'tcx> RustdocVisitor<'a, 'tcx> { let please_inline = item.attrs.iter().any(|item| { match item.meta_item_list() { Some(list) => { - list.iter().any(|i| &i.name()[] == "inline") + list.iter().any(|i| &i.name()[..] 
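The `html/format.rs` hunks above pull the stability level and reason out into a local and wrap both in `Escape` before they are interpolated into the generated HTML. A minimal sketch of what such an escaping `Display` wrapper looks like (an illustration in the same spirit, not rustdoc's actual `html::escape::Escape`):

```rust
use std::fmt;

struct Escape<'a>(&'a str);

impl<'a> fmt::Display for Escape<'a> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for c in self.0.chars() {
            match c {
                '<' => f.write_str("&lt;")?,
                '>' => f.write_str("&gt;")?,
                '&' => f.write_str("&amp;")?,
                '"' => f.write_str("&quot;")?,
                '\'' => f.write_str("&#39;")?,
                _ => write!(f, "{}", c)?,
            }
        }
        Ok(())
    }
}

fn main() {
    let reason = "use <code>new_api()</code> & friends";
    // Escaping before interpolation keeps stray markup out of the generated page.
    println!("<em class=\"stab\">{}</em>", Escape(reason));
}
```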
== "inline") } None => false, } diff --git a/src/libserialize/collection_impls.rs b/src/libserialize/collection_impls.rs index 10cf02f85e818..e7430f698e9c9 100644 --- a/src/libserialize/collection_impls.rs +++ b/src/libserialize/collection_impls.rs @@ -13,7 +13,6 @@ use std::usize; use std::default::Default; use std::hash::Hash; -#[cfg(stage0)] use std::hash::Hasher; use std::collections::hash_state::HashState; use {Decodable, Encodable, Decoder, Encoder}; @@ -158,26 +157,6 @@ impl< } } -#[cfg(stage0)] -impl Encodable for HashMap - where K: Encodable + Hash< ::Hasher> + Eq, - V: Encodable, - S: HashState, - ::Hasher: Hasher -{ - fn encode(&self, e: &mut E) -> Result<(), E::Error> { - e.emit_map(self.len(), |e| { - let mut i = 0; - for (key, val) in self { - try!(e.emit_map_elt_key(i, |e| key.encode(e))); - try!(e.emit_map_elt_val(i, |e| val.encode(e))); - i += 1; - } - Ok(()) - }) - } -} -#[cfg(not(stage0))] impl Encodable for HashMap where K: Encodable + Hash + Eq, V: Encodable, @@ -196,27 +175,6 @@ impl Encodable for HashMap } } -#[cfg(stage0)] -impl Decodable for HashMap - where K: Decodable + Hash< ::Hasher> + Eq, - V: Decodable, - S: HashState + Default, - ::Hasher: Hasher -{ - fn decode(d: &mut D) -> Result, D::Error> { - d.read_map(|d, len| { - let state = Default::default(); - let mut map = HashMap::with_capacity_and_hash_state(len, state); - for i in 0..len { - let key = try!(d.read_map_elt_key(i, |d| Decodable::decode(d))); - let val = try!(d.read_map_elt_val(i, |d| Decodable::decode(d))); - map.insert(key, val); - } - Ok(map) - }) - } -} -#[cfg(not(stage0))] impl Decodable for HashMap where K: Decodable + Hash + Eq, V: Decodable, @@ -236,24 +194,6 @@ impl Decodable for HashMap } } -#[cfg(stage0)] -impl Encodable for HashSet - where T: Encodable + Hash< ::Hasher> + Eq, - S: HashState, - ::Hasher: Hasher -{ - fn encode(&self, s: &mut E) -> Result<(), E::Error> { - s.emit_seq(self.len(), |s| { - let mut i = 0; - for e in self { - try!(s.emit_seq_elt(i, |s| e.encode(s))); - i += 1; - } - Ok(()) - }) - } -} -#[cfg(not(stage0))] impl Encodable for HashSet where T: Encodable + Hash + Eq, S: HashState, @@ -270,24 +210,6 @@ impl Encodable for HashSet } } -#[cfg(stage0)] -impl Decodable for HashSet - where T: Decodable + Hash< ::Hasher> + Eq, - S: HashState + Default, - ::Hasher: Hasher -{ - fn decode(d: &mut D) -> Result, D::Error> { - d.read_seq(|d, len| { - let state = Default::default(); - let mut set = HashSet::with_capacity_and_hash_state(len, state); - for i in 0..len { - set.insert(try!(d.read_seq_elt(i, |d| Decodable::decode(d)))); - } - Ok(set) - }) - } -} -#[cfg(not(stage0))] impl Decodable for HashSet where T: Decodable + Hash + Eq, S: HashState + Default, diff --git a/src/libstd/collections/hash/map.rs b/src/libstd/collections/hash/map.rs index ade4f1f0533ee..f5d2b8aed29df 100644 --- a/src/libstd/collections/hash/map.rs +++ b/src/libstd/collections/hash/map.rs @@ -88,7 +88,6 @@ impl DefaultResizePolicy { #[test] fn test_resize_policy() { - use prelude::v1::*; let rp = DefaultResizePolicy; for n in 0..1000 { assert!(rp.min_capacity(rp.usable_capacity(n)) <= n); @@ -2256,6 +2255,7 @@ mod test_map { #[test] fn test_entry_take_doesnt_corrupt() { + #![allow(deprecated)] //rand // Test for #19292 fn check(m: &HashMap) { for k in m.keys() { diff --git a/src/libstd/collections/hash/map_stage0.rs b/src/libstd/collections/hash/map_stage0.rs deleted file mode 100644 index f9e5044c59761..0000000000000 --- a/src/libstd/collections/hash/map_stage0.rs +++ /dev/null @@ -1,2330 +0,0 @@ -// 
Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. -// -// ignore-lexer-test FIXME #15883 - -use self::Entry::*; -use self::SearchResult::*; -use self::VacantEntryState::*; - -use borrow::Borrow; -use clone::Clone; -use cmp::{max, Eq, PartialEq}; -use default::Default; -use fmt::{self, Debug}; -use hash::{self, Hash, SipHasher}; -use iter::{self, Iterator, ExactSizeIterator, IntoIterator, IteratorExt, FromIterator, Extend, Map}; -use marker::Sized; -use mem::{self, replace}; -use num::{Int, UnsignedInt}; -use ops::{Deref, FnMut, Index, IndexMut}; -use option::Option::{self, Some, None}; -use rand::{self, Rng}; -use result::Result::{self, Ok, Err}; - -use super::table::{ - self, - Bucket, - EmptyBucket, - FullBucket, - FullBucketImm, - FullBucketMut, - RawTable, - SafeHash -}; -use super::table::BucketState::{ - Empty, - Full, -}; -use super::state::HashState; - -const INITIAL_LOG2_CAP: usize = 5; -#[unstable(feature = "std_misc")] -pub const INITIAL_CAPACITY: usize = 1 << INITIAL_LOG2_CAP; // 2^5 - -/// The default behavior of HashMap implements a load factor of 90.9%. -/// This behavior is characterized by the following condition: -/// -/// - if size > 0.909 * capacity: grow the map -#[derive(Clone)] -struct DefaultResizePolicy; - -impl DefaultResizePolicy { - fn new() -> DefaultResizePolicy { - DefaultResizePolicy - } - - #[inline] - fn min_capacity(&self, usable_size: usize) -> usize { - // Here, we are rephrasing the logic by specifying the lower limit - // on capacity: - // - // - if `cap < size * 1.1`: grow the map - usable_size * 11 / 10 - } - - /// An inverse of `min_capacity`, approximately. - #[inline] - fn usable_capacity(&self, cap: usize) -> usize { - // As the number of entries approaches usable capacity, - // min_capacity(size) must be smaller than the internal capacity, - // so that the map is not resized: - // `min_capacity(usable_capacity(x)) <= x`. - // The left-hand side can only be smaller due to flooring by integer - // division. - // - // This doesn't have to be checked for overflow since allocation size - // in bytes will overflow earlier than multiplication by 10. - cap * 10 / 11 - } -} - -#[test] -fn test_resize_policy() { - use prelude::v1::*; - let rp = DefaultResizePolicy; - for n in 0..1000 { - assert!(rp.min_capacity(rp.usable_capacity(n)) <= n); - assert!(rp.usable_capacity(rp.min_capacity(n)) <= n); - } -} - -// The main performance trick in this hashmap is called Robin Hood Hashing. -// It gains its excellent performance from one essential operation: -// -// If an insertion collides with an existing element, and that element's -// "probe distance" (how far away the element is from its ideal location) -// is higher than how far we've already probed, swap the elements. -// -// This massively lowers variance in probe distance, and allows us to get very -// high load factors with good performance. The 90% load factor I use is rather -// conservative. -// -// > Why a load factor of approximately 90%? -// -// In general, all the distances to initial buckets will converge on the mean. -// At a load factor of α, the odds of finding the target bucket after k -// probes is approximately 1-α^k. 
If we set this equal to 50% (since we converge -// on the mean) and set k=8 (64-byte cache line / 8-byte hash), α=0.92. I round -// this down to make the math easier on the CPU and avoid its FPU. -// Since on average we start the probing in the middle of a cache line, this -// strategy pulls in two cache lines of hashes on every lookup. I think that's -// pretty good, but if you want to trade off some space, it could go down to one -// cache line on average with an α of 0.84. -// -// > Wait, what? Where did you get 1-α^k from? -// -// On the first probe, your odds of a collision with an existing element is α. -// The odds of doing this twice in a row is approximately α^2. For three times, -// α^3, etc. Therefore, the odds of colliding k times is α^k. The odds of NOT -// colliding after k tries is 1-α^k. -// -// The paper from 1986 cited below mentions an implementation which keeps track -// of the distance-to-initial-bucket histogram. This approach is not suitable -// for modern architectures because it requires maintaining an internal data -// structure. This allows very good first guesses, but we are most concerned -// with guessing entire cache lines, not individual indexes. Furthermore, array -// accesses are no longer linear and in one direction, as we have now. There -// is also memory and cache pressure that this would entail that would be very -// difficult to properly see in a microbenchmark. -// -// ## Future Improvements (FIXME!) -// -// Allow the load factor to be changed dynamically and/or at initialization. -// -// Also, would it be possible for us to reuse storage when growing the -// underlying table? This is exactly the use case for 'realloc', and may -// be worth exploring. -// -// ## Future Optimizations (FIXME!) -// -// Another possible design choice that I made without any real reason is -// parameterizing the raw table over keys and values. Technically, all we need -// is the size and alignment of keys and values, and the code should be just as -// efficient (well, we might need one for power-of-two size and one for not...). -// This has the potential to reduce code bloat in rust executables, without -// really losing anything except 4 words (key size, key alignment, val size, -// val alignment) which can be passed in to every call of a `RawTable` function. -// This would definitely be an avenue worth exploring if people start complaining -// about the size of rust executables. -// -// Annotate exceedingly likely branches in `table::make_hash` -// and `search_hashed` to reduce instruction cache pressure -// and mispredictions once it becomes possible (blocked on issue #11092). -// -// Shrinking the table could simply reallocate in place after moving buckets -// to the first half. -// -// The growth algorithm (fragment of the Proof of Correctness) -// -------------------- -// -// The growth algorithm is basically a fast path of the naive reinsertion- -// during-resize algorithm. Other paths should never be taken. -// -// Consider growing a robin hood hashtable of capacity n. Normally, we do this -// by allocating a new table of capacity `2n`, and then individually reinsert -// each element in the old table into the new one. This guarantees that the -// new table is a valid robin hood hashtable with all the desired statistical -// properties. Remark that the order we reinsert the elements in should not -// matter. 
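The load-factor argument in the comment above (a single probe collides with probability α, so k probes all collide with probability α^k) is easy to check numerically; at α = 0.92 the chance of having found a free bucket within 8 probes is roughly one half:

```rust
fn main() {
    // P(free bucket found within k probes) = 1 - alpha^k, per the comment above.
    let alpha: f64 = 0.92;
    for k in [1, 4, 8, 16] {
        println!("k = {:2}: {:.3}", k, 1.0 - alpha.powi(k));
    }
    // k = 8 prints ~0.487, i.e. the ~50% figure used to justify the load factor.
}
```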
For simplicity and efficiency, we will consider only linear -// reinsertions, which consist of reinserting all elements in the old table -// into the new one by increasing order of index. However we will not be -// starting our reinsertions from index 0 in general. If we start from index -// i, for the purpose of reinsertion we will consider all elements with real -// index j < i to have virtual index n + j. -// -// Our hash generation scheme consists of generating a 64-bit hash and -// truncating the most significant bits. When moving to the new table, we -// simply introduce a new bit to the front of the hash. Therefore, if an -// elements has ideal index i in the old table, it can have one of two ideal -// locations in the new table. If the new bit is 0, then the new ideal index -// is i. If the new bit is 1, then the new ideal index is n + i. Intuitively, -// we are producing two independent tables of size n, and for each element we -// independently choose which table to insert it into with equal probability. -// However the rather than wrapping around themselves on overflowing their -// indexes, the first table overflows into the first, and the first into the -// second. Visually, our new table will look something like: -// -// [yy_xxx_xxxx_xxx|xx_yyy_yyyy_yyy] -// -// Where x's are elements inserted into the first table, y's are elements -// inserted into the second, and _'s are empty sections. We now define a few -// key concepts that we will use later. Note that this is a very abstract -// perspective of the table. A real resized table would be at least half -// empty. -// -// Theorem: A linear robin hood reinsertion from the first ideal element -// produces identical results to a linear naive reinsertion from the same -// element. -// -// FIXME(Gankro, pczarn): review the proof and put it all in a separate doc.rs - -/// A hash map implementation which uses linear probing with Robin -/// Hood bucket stealing. -/// -/// The hashes are all keyed by the task-local random number generator -/// on creation by default. This means that the ordering of the keys is -/// randomized, but makes the tables more resistant to -/// denial-of-service attacks (Hash DoS). This behaviour can be -/// overridden with one of the constructors. -/// -/// It is required that the keys implement the `Eq` and `Hash` traits, although -/// this can frequently be achieved by using `#[derive(Eq, Hash)]`. -/// -/// Relevant papers/articles: -/// -/// 1. Pedro Celis. ["Robin Hood Hashing"](https://cs.uwaterloo.ca/research/tr/1986/CS-86-14.pdf) -/// 2. Emmanuel Goossaert. ["Robin Hood -/// hashing"](http://codecapsule.com/2013/11/11/robin-hood-hashing/) -/// 3. Emmanuel Goossaert. ["Robin Hood hashing: backward shift -/// deletion"](http://codecapsule.com/2013/11/17/robin-hood-hashing-backward-shift-deletion/) -/// -/// # Example -/// -/// ``` -/// use std::collections::HashMap; -/// -/// // type inference lets us omit an explicit type signature (which -/// // would be `HashMap<&str, &str>` in this example). -/// let mut book_reviews = HashMap::new(); -/// -/// // review some books. -/// book_reviews.insert("Adventures of Huckleberry Finn", "My favorite book."); -/// book_reviews.insert("Grimms' Fairy Tales", "Masterpiece."); -/// book_reviews.insert("Pride and Prejudice", "Very enjoyable."); -/// book_reviews.insert("The Adventures of Sherlock Holmes", "Eye lyked it alot."); -/// -/// // check for a specific one. 
-/// if !book_reviews.contains_key(&("Les Misérables")) { -/// println!("We've got {} reviews, but Les Misérables ain't one.", -/// book_reviews.len()); -/// } -/// -/// // oops, this review has a lot of spelling mistakes, let's delete it. -/// book_reviews.remove(&("The Adventures of Sherlock Holmes")); -/// -/// // look up the values associated with some keys. -/// let to_find = ["Pride and Prejudice", "Alice's Adventure in Wonderland"]; -/// for book in to_find.iter() { -/// match book_reviews.get(book) { -/// Some(review) => println!("{}: {}", *book, *review), -/// None => println!("{} is unreviewed.", *book) -/// } -/// } -/// -/// // iterate over everything. -/// for (book, review) in book_reviews.iter() { -/// println!("{}: \"{}\"", *book, *review); -/// } -/// ``` -/// -/// The easiest way to use `HashMap` with a custom type as key is to derive `Eq` and `Hash`. -/// We must also derive `PartialEq`. -/// -/// ``` -/// use std::collections::HashMap; -/// -/// #[derive(Hash, Eq, PartialEq, Debug)] -/// struct Viking { -/// name: String, -/// country: String, -/// } -/// -/// impl Viking { -/// /// Create a new Viking. -/// fn new(name: &str, country: &str) -> Viking { -/// Viking { name: name.to_string(), country: country.to_string() } -/// } -/// } -/// -/// // Use a HashMap to store the vikings' health points. -/// let mut vikings = HashMap::new(); -/// -/// vikings.insert(Viking::new("Einar", "Norway"), 25); -/// vikings.insert(Viking::new("Olaf", "Denmark"), 24); -/// vikings.insert(Viking::new("Harald", "Iceland"), 12); -/// -/// // Use derived implementation to print the status of the vikings. -/// for (viking, health) in vikings.iter() { -/// println!("{:?} has {} hp", viking, health); -/// } -/// ``` -#[derive(Clone)] -#[stable(feature = "rust1", since = "1.0.0")] -pub struct HashMap { - // All hashes are keyed on these values, to prevent hash collision attacks. - hash_state: S, - - table: RawTable, - - resize_policy: DefaultResizePolicy, -} - -/// Search for a pre-hashed key. -fn search_hashed(table: M, - hash: SafeHash, - mut is_match: F) - -> SearchResult where - M: Deref>, - F: FnMut(&K) -> bool, -{ - let size = table.size(); - let mut probe = Bucket::new(table, hash); - let ib = probe.index(); - - while probe.index() != ib + size { - let full = match probe.peek() { - Empty(b) => return TableRef(b.into_table()), // hit an empty bucket - Full(b) => b - }; - - if full.distance() + ib < full.index() { - // We can finish the search early if we hit any bucket - // with a lower distance to initial bucket than we've probed. - return TableRef(full.into_table()); - } - - // If the hash doesn't match, it can't be this one.. - if hash == full.hash() { - // If the key doesn't match, it can't be this one.. - if is_match(full.read().0) { - return FoundExisting(full); - } - } - - probe = full.next(); - } - - TableRef(probe.into_table()) -} - -fn pop_internal(starting_bucket: FullBucketMut) -> (K, V) { - let (empty, retkey, retval) = starting_bucket.take(); - let mut gap = match empty.gap_peek() { - Some(b) => b, - None => return (retkey, retval) - }; - - while gap.full().distance() != 0 { - gap = match gap.shift() { - Some(b) => b, - None => break - }; - } - - // Now we've done all our shifting. Return the value we grabbed earlier. - (retkey, retval) -} - -/// Perform robin hood bucket stealing at the given `bucket`. You must -/// also pass the position of that bucket's initial bucket so we don't have -/// to recalculate it. 
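The "probe distance" that the Robin Hood code above keeps comparing is simply how far a bucket sits from its ideal slot, with wrap-around at the end of the table. A tiny sketch of that computation, assuming (as the table code does) that capacity is a power of two; the function name is invented for the sketch:

```rust
/// Distance of `actual_index` from `ideal_index`, wrapping at `capacity`
/// (capacity must be a power of two so a mask can handle the wrap-around).
fn probe_distance(capacity: usize, ideal_index: usize, actual_index: usize) -> usize {
    actual_index.wrapping_sub(ideal_index) & (capacity - 1)
}

fn main() {
    assert_eq!(probe_distance(8, 2, 2), 0); // sitting in its ideal bucket
    assert_eq!(probe_distance(8, 6, 1), 3); // wrapped around: 6 -> 7 -> 0 -> 1
    println!("probe distances ok");
}
```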
-/// -/// `hash`, `k`, and `v` are the elements to "robin hood" into the hashtable. -fn robin_hood<'a, K: 'a, V: 'a>(mut bucket: FullBucketMut<'a, K, V>, - mut ib: usize, - mut hash: SafeHash, - mut k: K, - mut v: V) - -> &'a mut V { - let starting_index = bucket.index(); - let size = { - let table = bucket.table(); // FIXME "lifetime too short". - table.size() - }; - // There can be at most `size - dib` buckets to displace, because - // in the worst case, there are `size` elements and we already are - // `distance` buckets away from the initial one. - let idx_end = starting_index + size - bucket.distance(); - - loop { - let (old_hash, old_key, old_val) = bucket.replace(hash, k, v); - loop { - let probe = bucket.next(); - assert!(probe.index() != idx_end); - - let full_bucket = match probe.peek() { - Empty(bucket) => { - // Found a hole! - let b = bucket.put(old_hash, old_key, old_val); - // Now that it's stolen, just read the value's pointer - // right out of the table! - return Bucket::at_index(b.into_table(), starting_index) - .peek() - .expect_full() - .into_mut_refs() - .1; - }, - Full(bucket) => bucket - }; - - let probe_ib = full_bucket.index() - full_bucket.distance(); - - bucket = full_bucket; - - // Robin hood! Steal the spot. - if ib < probe_ib { - ib = probe_ib; - hash = old_hash; - k = old_key; - v = old_val; - break; - } - } - } -} - -/// A result that works like Option> but preserves -/// the reference that grants us access to the table in any case. -enum SearchResult { - // This is an entry that holds the given key: - FoundExisting(FullBucket), - - // There was no such entry. The reference is given back: - TableRef(M) -} - -impl SearchResult { - fn into_option(self) -> Option> { - match self { - FoundExisting(bucket) => Some(bucket), - TableRef(_) => None - } - } -} - -impl HashMap - where K: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - fn make_hash(&self, x: &X) -> SafeHash where X: Hash { - table::make_hash(&self.hash_state, x) - } - - /// Search for a key, yielding the index if it's found in the hashtable. - /// If you already have the hash for the key lying around, use - /// search_hashed. - fn search<'a, Q: ?Sized>(&'a self, q: &Q) -> Option> - where K: Borrow, Q: Eq + Hash - { - let hash = self.make_hash(q); - search_hashed(&self.table, hash, |k| q.eq(k.borrow())) - .into_option() - } - - fn search_mut<'a, Q: ?Sized>(&'a mut self, q: &Q) -> Option> - where K: Borrow, Q: Eq + Hash - { - let hash = self.make_hash(q); - search_hashed(&mut self.table, hash, |k| q.eq(k.borrow())) - .into_option() - } - - // The caller should ensure that invariants by Robin Hood Hashing hold. - fn insert_hashed_ordered(&mut self, hash: SafeHash, k: K, v: V) { - let cap = self.table.capacity(); - let mut buckets = Bucket::new(&mut self.table, hash); - let ib = buckets.index(); - - while buckets.index() != ib + cap { - // We don't need to compare hashes for value swap. - // Not even DIBs for Robin Hood. - buckets = match buckets.peek() { - Empty(empty) => { - empty.put(hash, k, v); - return; - } - Full(b) => b.into_bucket() - }; - buckets.next(); - } - panic!("Internal HashMap error: Out of space."); - } -} - -impl + Eq, V> HashMap { - /// Create an empty HashMap. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// let mut map: HashMap<&str, int> = HashMap::new(); - /// ``` - #[inline] - #[stable(feature = "rust1", since = "1.0.0")] - pub fn new() -> HashMap { - Default::default() - } - - /// Creates an empty hash map with the given initial capacity. 
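On current stable Rust the constructor story above survives as `with_capacity` and `with_capacity_and_hasher` (the `HashState`/`Hasher` plumbing was later replaced by the `BuildHasher` trait); a short usage sketch:

```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;

fn main() {
    // Today's equivalent of `with_capacity_and_hash_state`.
    let s = RandomState::new();
    let mut map: HashMap<&str, i32> = HashMap::with_capacity_and_hasher(10, s);
    map.insert("answer", 42);
    assert!(map.capacity() >= 10);
    println!("{:?}", map);
}
```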
- /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// let mut map: HashMap<&str, int> = HashMap::with_capacity(10); - /// ``` - #[inline] - #[stable(feature = "rust1", since = "1.0.0")] - pub fn with_capacity(capacity: usize) -> HashMap { - HashMap::with_capacity_and_hash_state(capacity, Default::default()) - } -} - -impl HashMap - where K: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - /// Creates an empty hashmap which will use the given hasher to hash keys. - /// - /// The creates map has the default initial capacity. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// use std::collections::hash_map::RandomState; - /// - /// let s = RandomState::new(); - /// let mut map = HashMap::with_hash_state(s); - /// map.insert(1, 2); - /// ``` - #[inline] - #[unstable(feature = "std_misc", reason = "hasher stuff is unclear")] - pub fn with_hash_state(hash_state: S) -> HashMap { - HashMap { - hash_state: hash_state, - resize_policy: DefaultResizePolicy::new(), - table: RawTable::new(0), - } - } - - /// Create an empty HashMap with space for at least `capacity` - /// elements, using `hasher` to hash the keys. - /// - /// Warning: `hasher` is normally randomly generated, and - /// is designed to allow HashMaps to be resistant to attacks that - /// cause many collisions and very poor performance. Setting it - /// manually using this function can expose a DoS attack vector. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// use std::collections::hash_map::RandomState; - /// - /// let s = RandomState::new(); - /// let mut map = HashMap::with_capacity_and_hash_state(10, s); - /// map.insert(1, 2); - /// ``` - #[inline] - #[unstable(feature = "std_misc", reason = "hasher stuff is unclear")] - pub fn with_capacity_and_hash_state(capacity: usize, hash_state: S) - -> HashMap { - let resize_policy = DefaultResizePolicy::new(); - let min_cap = max(INITIAL_CAPACITY, resize_policy.min_capacity(capacity)); - let internal_cap = min_cap.checked_next_power_of_two().expect("capacity overflow"); - assert!(internal_cap >= capacity, "capacity overflow"); - HashMap { - hash_state: hash_state, - resize_policy: resize_policy, - table: RawTable::new(internal_cap), - } - } - - /// Returns the number of elements the map can hold without reallocating. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// let map: HashMap = HashMap::with_capacity(100); - /// assert!(map.capacity() >= 100); - /// ``` - #[inline] - #[stable(feature = "rust1", since = "1.0.0")] - pub fn capacity(&self) -> usize { - self.resize_policy.usable_capacity(self.table.capacity()) - } - - /// Reserves capacity for at least `additional` more elements to be inserted - /// in the `HashMap`. The collection may reserve more space to avoid - /// frequent reallocations. - /// - /// # Panics - /// - /// Panics if the new allocation size overflows `usize`. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// let mut map: HashMap<&str, int> = HashMap::new(); - /// map.reserve(10); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn reserve(&mut self, additional: usize) { - let new_size = self.len().checked_add(additional).expect("capacity overflow"); - let min_cap = self.resize_policy.min_capacity(new_size); - - // An invalid value shouldn't make us run out of space. This includes - // an overflow check. 
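`reserve` here leans on the `DefaultResizePolicy` arithmetic defined earlier in this file (usable capacity is roughly 10/11 of the raw capacity). The invariants checked by the deleted `test_resize_policy` can be reproduced directly; a small sketch with free functions standing in for the policy methods:

```rust
// Stand-ins for DefaultResizePolicy::min_capacity / usable_capacity above.
fn min_capacity(usable: usize) -> usize { usable * 11 / 10 }
fn usable_capacity(cap: usize) -> usize { cap * 10 / 11 }

fn main() {
    for n in 0..1000 {
        // The same invariants the deleted `test_resize_policy` asserts.
        assert!(min_capacity(usable_capacity(n)) <= n);
        assert!(usable_capacity(min_capacity(n)) <= n);
    }
    println!("resize-policy invariants hold for 0..1000");
}
```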
- assert!(new_size <= min_cap); - - if self.table.capacity() < min_cap { - let new_capacity = max(min_cap.next_power_of_two(), INITIAL_CAPACITY); - self.resize(new_capacity); - } - } - - /// Resizes the internal vectors to a new capacity. It's your responsibility to: - /// 1) Make sure the new capacity is enough for all the elements, accounting - /// for the load factor. - /// 2) Ensure new_capacity is a power of two or zero. - fn resize(&mut self, new_capacity: usize) { - assert!(self.table.size() <= new_capacity); - assert!(new_capacity.is_power_of_two() || new_capacity == 0); - - let mut old_table = replace(&mut self.table, RawTable::new(new_capacity)); - let old_size = old_table.size(); - - if old_table.capacity() == 0 || old_table.size() == 0 { - return; - } - - // Grow the table. - // Specialization of the other branch. - let mut bucket = Bucket::first(&mut old_table); - - // "So a few of the first shall be last: for many be called, - // but few chosen." - // - // We'll most likely encounter a few buckets at the beginning that - // have their initial buckets near the end of the table. They were - // placed at the beginning as the probe wrapped around the table - // during insertion. We must skip forward to a bucket that won't - // get reinserted too early and won't unfairly steal others spot. - // This eliminates the need for robin hood. - loop { - bucket = match bucket.peek() { - Full(full) => { - if full.distance() == 0 { - // This bucket occupies its ideal spot. - // It indicates the start of another "cluster". - bucket = full.into_bucket(); - break; - } - // Leaving this bucket in the last cluster for later. - full.into_bucket() - } - Empty(b) => { - // Encountered a hole between clusters. - b.into_bucket() - } - }; - bucket.next(); - } - - // This is how the buckets might be laid out in memory: - // ($ marks an initialized bucket) - // ________________ - // |$$$_$$$$$$_$$$$$| - // - // But we've skipped the entire initial cluster of buckets - // and will continue iteration in this order: - // ________________ - // |$$$$$$_$$$$$ - // ^ wrap around once end is reached - // ________________ - // $$$_____________| - // ^ exit once table.size == 0 - loop { - bucket = match bucket.peek() { - Full(bucket) => { - let h = bucket.hash(); - let (b, k, v) = bucket.take(); - self.insert_hashed_ordered(h, k, v); - { - let t = b.table(); // FIXME "lifetime too short". - if t.size() == 0 { break } - }; - b.into_bucket() - } - Empty(b) => b.into_bucket() - }; - bucket.next(); - } - - assert_eq!(self.table.size(), old_size); - } - - /// Shrinks the capacity of the map as much as possible. It will drop - /// down as much as possible while maintaining the internal rules - /// and possibly leaving some space in accordance with the resize policy. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map: HashMap = HashMap::with_capacity(100); - /// map.insert(1, 2); - /// map.insert(3, 4); - /// assert!(map.capacity() >= 100); - /// map.shrink_to_fit(); - /// assert!(map.capacity() >= 2); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn shrink_to_fit(&mut self) { - let min_capacity = self.resize_policy.min_capacity(self.len()); - let min_capacity = max(min_capacity.next_power_of_two(), INITIAL_CAPACITY); - - // An invalid value shouldn't make us run out of space. 
- debug_assert!(self.len() <= min_capacity); - - if self.table.capacity() != min_capacity { - let old_table = replace(&mut self.table, RawTable::new(min_capacity)); - let old_size = old_table.size(); - - // Shrink the table. Naive algorithm for resizing: - for (h, k, v) in old_table.into_iter() { - self.insert_hashed_nocheck(h, k, v); - } - - debug_assert_eq!(self.table.size(), old_size); - } - } - - /// Insert a pre-hashed key-value pair, without first checking - /// that there's enough room in the buckets. Returns a reference to the - /// newly insert value. - /// - /// If the key already exists, the hashtable will be returned untouched - /// and a reference to the existing element will be returned. - fn insert_hashed_nocheck(&mut self, hash: SafeHash, k: K, v: V) -> &mut V { - self.insert_or_replace_with(hash, k, v, |_, _, _| ()) - } - - fn insert_or_replace_with<'a, F>(&'a mut self, - hash: SafeHash, - k: K, - v: V, - mut found_existing: F) - -> &'a mut V where - F: FnMut(&mut K, &mut V, V), - { - // Worst case, we'll find one empty bucket among `size + 1` buckets. - let size = self.table.size(); - let mut probe = Bucket::new(&mut self.table, hash); - let ib = probe.index(); - - loop { - let mut bucket = match probe.peek() { - Empty(bucket) => { - // Found a hole! - return bucket.put(hash, k, v).into_mut_refs().1; - } - Full(bucket) => bucket - }; - - // hash matches? - if bucket.hash() == hash { - // key matches? - if k == *bucket.read_mut().0 { - let (bucket_k, bucket_v) = bucket.into_mut_refs(); - debug_assert!(k == *bucket_k); - // Key already exists. Get its reference. - found_existing(bucket_k, bucket_v, v); - return bucket_v; - } - } - - let robin_ib = bucket.index() as int - bucket.distance() as int; - - if (ib as int) < robin_ib { - // Found a luckier bucket than me. Better steal his spot. - return robin_hood(bucket, robin_ib as usize, hash, k, v); - } - - probe = bucket.next(); - assert!(probe.index() != ib + size + 1); - } - } - - /// An iterator visiting all keys in arbitrary order. - /// Iterator element type is `&'a K`. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert("a", 1); - /// map.insert("b", 2); - /// map.insert("c", 3); - /// - /// for key in map.keys() { - /// println!("{}", key); - /// } - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn keys<'a>(&'a self) -> Keys<'a, K, V> { - fn first((a, _): (A, B)) -> A { a } - let first: fn((&'a K,&'a V)) -> &'a K = first; // coerce to fn ptr - - Keys { inner: self.iter().map(first) } - } - - /// An iterator visiting all values in arbitrary order. - /// Iterator element type is `&'a V`. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert("a", 1); - /// map.insert("b", 2); - /// map.insert("c", 3); - /// - /// for val in map.values() { - /// println!("{}", val); - /// } - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn values<'a>(&'a self) -> Values<'a, K, V> { - fn second((_, b): (A, B)) -> B { b } - let second: fn((&'a K,&'a V)) -> &'a V = second; // coerce to fn ptr - - Values { inner: self.iter().map(second) } - } - - /// An iterator visiting all key-value pairs in arbitrary order. - /// Iterator element type is `(&'a K, &'a V)`. 
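The iterator family documented above maps directly onto the same methods on today's `HashMap`; a quick usage sketch:

```rust
use std::collections::HashMap;

fn main() {
    let mut map = HashMap::new();
    map.insert("a", 1);
    map.insert("b", 2);

    // `iter_mut` yields `(&K, &mut V)` pairs, as the docs above describe.
    for (_key, val) in map.iter_mut() {
        *val *= 2;
    }

    let mut vals: Vec<i32> = map.values().copied().collect();
    vals.sort();
    assert_eq!(vals, vec![2, 4]);
}
```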
- /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert("a", 1); - /// map.insert("b", 2); - /// map.insert("c", 3); - /// - /// for (key, val) in map.iter() { - /// println!("key: {} val: {}", key, val); - /// } - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn iter(&self) -> Iter { - Iter { inner: self.table.iter() } - } - - /// An iterator visiting all key-value pairs in arbitrary order, - /// with mutable references to the values. - /// Iterator element type is `(&'a K, &'a mut V)`. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert("a", 1); - /// map.insert("b", 2); - /// map.insert("c", 3); - /// - /// // Update all values - /// for (_, val) in map.iter_mut() { - /// *val *= 2; - /// } - /// - /// for (key, val) in map.iter() { - /// println!("key: {} val: {}", key, val); - /// } - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn iter_mut(&mut self) -> IterMut { - IterMut { inner: self.table.iter_mut() } - } - - /// Creates a consuming iterator, that is, one that moves each key-value - /// pair out of the map in arbitrary order. The map cannot be used after - /// calling this. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert("a", 1); - /// map.insert("b", 2); - /// map.insert("c", 3); - /// - /// // Not possible with .iter() - /// let vec: Vec<(&str, int)> = map.into_iter().collect(); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn into_iter(self) -> IntoIter { - fn last_two((_, b, c): (A, B, C)) -> (B, C) { (b, c) } - let last_two: fn((SafeHash, K, V)) -> (K, V) = last_two; - - IntoIter { - inner: self.table.into_iter().map(last_two) - } - } - - /// Gets the given key's corresponding entry in the map for in-place manipulation. - #[stable(feature = "rust1", since = "1.0.0")] - pub fn entry(&mut self, key: K) -> Entry { - // Gotta resize now. - self.reserve(1); - - let hash = self.make_hash(&key); - search_entry_hashed(&mut self.table, hash, key) - } - - /// Returns the number of elements in the map. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut a = HashMap::new(); - /// assert_eq!(a.len(), 0); - /// a.insert(1, "a"); - /// assert_eq!(a.len(), 1); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn len(&self) -> usize { self.table.size() } - - /// Returns true if the map contains no elements. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut a = HashMap::new(); - /// assert!(a.is_empty()); - /// a.insert(1, "a"); - /// assert!(!a.is_empty()); - /// ``` - #[inline] - #[stable(feature = "rust1", since = "1.0.0")] - pub fn is_empty(&self) -> bool { self.len() == 0 } - - /// Clears the map, returning all key-value pairs as an iterator. Keeps the - /// allocated memory for reuse. 
- /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut a = HashMap::new(); - /// a.insert(1, "a"); - /// a.insert(2, "b"); - /// - /// for (k, v) in a.drain().take(1) { - /// assert!(k == 1 || k == 2); - /// assert!(v == "a" || v == "b"); - /// } - /// - /// assert!(a.is_empty()); - /// ``` - #[inline] - #[unstable(feature = "std_misc", - reason = "matches collection reform specification, waiting for dust to settle")] - pub fn drain(&mut self) -> Drain { - fn last_two((_, b, c): (A, B, C)) -> (B, C) { (b, c) } - let last_two: fn((SafeHash, K, V)) -> (K, V) = last_two; // coerce to fn pointer - - Drain { - inner: self.table.drain().map(last_two), - } - } - - /// Clears the map, removing all key-value pairs. Keeps the allocated memory - /// for reuse. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut a = HashMap::new(); - /// a.insert(1, "a"); - /// a.clear(); - /// assert!(a.is_empty()); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - #[inline] - pub fn clear(&mut self) { - self.drain(); - } - - /// Returns a reference to the value corresponding to the key. - /// - /// The key may be any borrowed form of the map's key type, but - /// `Hash` and `Eq` on the borrowed form *must* match those for - /// the key type. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert(1, "a"); - /// assert_eq!(map.get(&1), Some(&"a")); - /// assert_eq!(map.get(&2), None); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn get(&self, k: &Q) -> Option<&V> - where K: Borrow, Q: Hash + Eq - { - self.search(k).map(|bucket| bucket.into_refs().1) - } - - /// Returns true if the map contains a value for the specified key. - /// - /// The key may be any borrowed form of the map's key type, but - /// `Hash` and `Eq` on the borrowed form *must* match those for - /// the key type. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert(1, "a"); - /// assert_eq!(map.contains_key(&1), true); - /// assert_eq!(map.contains_key(&2), false); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn contains_key(&self, k: &Q) -> bool - where K: Borrow, Q: Hash + Eq - { - self.search(k).is_some() - } - - /// Returns a mutable reference to the value corresponding to the key. - /// - /// The key may be any borrowed form of the map's key type, but - /// `Hash` and `Eq` on the borrowed form *must* match those for - /// the key type. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert(1, "a"); - /// match map.get_mut(&1) { - /// Some(x) => *x = "b", - /// None => (), - /// } - /// assert_eq!(map[1], "b"); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn get_mut(&mut self, k: &Q) -> Option<&mut V> - where K: Borrow, Q: Hash + Eq - { - self.search_mut(k).map(|bucket| bucket.into_mut_refs().1) - } - - /// Inserts a key-value pair from the map. If the key already had a value - /// present in the map, that value is returned. Otherwise, `None` is returned. 
- /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// assert_eq!(map.insert(37, "a"), None); - /// assert_eq!(map.is_empty(), false); - /// - /// map.insert(37, "b"); - /// assert_eq!(map.insert(37, "c"), Some("b")); - /// assert_eq!(map[37], "c"); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn insert(&mut self, k: K, v: V) -> Option { - let hash = self.make_hash(&k); - self.reserve(1); - - let mut retval = None; - self.insert_or_replace_with(hash, k, v, |_, val_ref, val| { - retval = Some(replace(val_ref, val)); - }); - retval - } - - /// Removes a key from the map, returning the value at the key if the key - /// was previously in the map. - /// - /// The key may be any borrowed form of the map's key type, but - /// `Hash` and `Eq` on the borrowed form *must* match those for - /// the key type. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashMap; - /// - /// let mut map = HashMap::new(); - /// map.insert(1, "a"); - /// assert_eq!(map.remove(&1), Some("a")); - /// assert_eq!(map.remove(&1), None); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn remove(&mut self, k: &Q) -> Option - where K: Borrow, Q: Hash + Eq - { - if self.table.size() == 0 { - return None - } - - self.search_mut(k).map(|bucket| pop_internal(bucket).1) - } -} - -fn search_entry_hashed<'a, K: Eq, V>(table: &'a mut RawTable, hash: SafeHash, k: K) - -> Entry<'a, K, V> -{ - // Worst case, we'll find one empty bucket among `size + 1` buckets. - let size = table.size(); - let mut probe = Bucket::new(table, hash); - let ib = probe.index(); - - loop { - let bucket = match probe.peek() { - Empty(bucket) => { - // Found a hole! - return Vacant(VacantEntry { - hash: hash, - key: k, - elem: NoElem(bucket), - }); - }, - Full(bucket) => bucket - }; - - // hash matches? - if bucket.hash() == hash { - // key matches? - if k == *bucket.read().0 { - return Occupied(OccupiedEntry{ - elem: bucket, - }); - } - } - - let robin_ib = bucket.index() as int - bucket.distance() as int; - - if (ib as int) < robin_ib { - // Found a luckier bucket than me. Better steal his spot. 
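`search_entry_hashed` above is what backs `entry`: a single probe sequence that ends in either an occupied or a vacant slot. The caller-facing pattern, shown here with today's `or_insert` convenience (which postdates the `Entry::get` in this file), looks like:

```rust
use std::collections::HashMap;

fn main() {
    let mut counts: HashMap<&str, u32> = HashMap::new();
    for word in ["robin", "hood", "robin"] {
        // One lookup decides between the occupied and vacant cases.
        *counts.entry(word).or_insert(0) += 1;
    }
    assert_eq!(counts["robin"], 2);
    assert_eq!(counts["hood"], 1);
}
```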
- return Vacant(VacantEntry { - hash: hash, - key: k, - elem: NeqElem(bucket, robin_ib as usize), - }); - } - - probe = bucket.next(); - assert!(probe.index() != ib + size + 1); - } -} - -impl PartialEq for HashMap - where K: Eq + Hash, V: PartialEq, - S: HashState, - H: hash::Hasher -{ - fn eq(&self, other: &HashMap) -> bool { - if self.len() != other.len() { return false; } - - self.iter().all(|(key, value)| - other.get(key).map_or(false, |v| *value == *v) - ) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Eq for HashMap - where K: Eq + Hash, V: Eq, - S: HashState, - H: hash::Hasher -{} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Debug for HashMap - where K: Eq + Hash + Debug, V: Debug, - S: HashState, - H: hash::Hasher -{ - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - try!(write!(f, "HashMap {{")); - - for (i, (k, v)) in self.iter().enumerate() { - if i != 0 { try!(write!(f, ", ")); } - try!(write!(f, "{:?}: {:?}", *k, *v)); - } - - write!(f, "}}") - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Default for HashMap - where K: Eq + Hash, - S: HashState + Default, - H: hash::Hasher -{ - fn default() -> HashMap { - HashMap::with_hash_state(Default::default()) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Index for HashMap - where K: Eq + Hash + Borrow, - Q: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Output = V; - - #[inline] - fn index<'a>(&'a self, index: &Q) -> &'a V { - self.get(index).expect("no entry found for key") - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl IndexMut for HashMap - where K: Eq + Hash + Borrow, - Q: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - #[inline] - fn index_mut<'a>(&'a mut self, index: &Q) -> &'a mut V { - self.get_mut(index).expect("no entry found for key") - } -} - -/// HashMap iterator. -#[stable(feature = "rust1", since = "1.0.0")] -pub struct Iter<'a, K: 'a, V: 'a> { - inner: table::Iter<'a, K, V> -} - -// FIXME(#19839) Remove in favor of `#[derive(Clone)]` -impl<'a, K, V> Clone for Iter<'a, K, V> { - fn clone(&self) -> Iter<'a, K, V> { - Iter { - inner: self.inner.clone() - } - } -} - -/// HashMap mutable values iterator. -#[stable(feature = "rust1", since = "1.0.0")] -pub struct IterMut<'a, K: 'a, V: 'a> { - inner: table::IterMut<'a, K, V> -} - -/// HashMap move iterator. -#[stable(feature = "rust1", since = "1.0.0")] -pub struct IntoIter { - inner: iter::Map, fn((SafeHash, K, V)) -> (K, V)> -} - -/// HashMap keys iterator. -#[stable(feature = "rust1", since = "1.0.0")] -pub struct Keys<'a, K: 'a, V: 'a> { - inner: Map, fn((&'a K, &'a V)) -> &'a K> -} - -// FIXME(#19839) Remove in favor of `#[derive(Clone)]` -impl<'a, K, V> Clone for Keys<'a, K, V> { - fn clone(&self) -> Keys<'a, K, V> { - Keys { - inner: self.inner.clone() - } - } -} - -/// HashMap values iterator. -#[stable(feature = "rust1", since = "1.0.0")] -pub struct Values<'a, K: 'a, V: 'a> { - inner: Map, fn((&'a K, &'a V)) -> &'a V> -} - -// FIXME(#19839) Remove in favor of `#[derive(Clone)]` -impl<'a, K, V> Clone for Values<'a, K, V> { - fn clone(&self) -> Values<'a, K, V> { - Values { - inner: self.inner.clone() - } - } -} - -/// HashMap drain iterator. -#[unstable(feature = "std_misc", - reason = "matches collection reform specification, waiting for dust to settle")] -pub struct Drain<'a, K: 'a, V: 'a> { - inner: iter::Map, fn((SafeHash, K, V)) -> (K, V)> -} - -/// A view into a single occupied location in a HashMap. 
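The `Index` impl above panics with "no entry found for key" when the key is absent, while `get` reports absence through an `Option`; a two-line contrast on current std:

```rust
use std::collections::HashMap;

fn main() {
    let mut map = HashMap::new();
    map.insert(1, "a");

    assert_eq!(map.get(&2), None); // absence reported as a value
    assert_eq!(map[&1], "a");      // indexing a present key is fine
    // `map[&2]` would panic with "no entry found for key".
}
```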
-#[unstable(feature = "std_misc", - reason = "precise API still being fleshed out")] -pub struct OccupiedEntry<'a, K: 'a, V: 'a> { - elem: FullBucket>, -} - -/// A view into a single empty location in a HashMap. -#[unstable(feature = "std_misc", - reason = "precise API still being fleshed out")] -pub struct VacantEntry<'a, K: 'a, V: 'a> { - hash: SafeHash, - key: K, - elem: VacantEntryState>, -} - -/// A view into a single location in a map, which may be vacant or occupied. -#[unstable(feature = "std_misc", - reason = "precise API still being fleshed out")] -pub enum Entry<'a, K: 'a, V: 'a> { - /// An occupied Entry. - Occupied(OccupiedEntry<'a, K, V>), - /// A vacant Entry. - Vacant(VacantEntry<'a, K, V>), -} - -/// Possible states of a VacantEntry. -enum VacantEntryState { - /// The index is occupied, but the key to insert has precedence, - /// and will kick the current one out on insertion. - NeqElem(FullBucket, usize), - /// The index is genuinely vacant. - NoElem(EmptyBucket), -} - -impl<'a, K, V, S, H> IntoIterator for &'a HashMap - where K: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = (&'a K, &'a V); - type IntoIter = Iter<'a, K, V>; - - fn into_iter(self) -> Iter<'a, K, V> { - self.iter() - } -} - -impl<'a, K, V, S, H> IntoIterator for &'a mut HashMap - where K: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = (&'a K, &'a mut V); - type IntoIter = IterMut<'a, K, V>; - - fn into_iter(mut self) -> IterMut<'a, K, V> { - self.iter_mut() - } -} - -impl IntoIterator for HashMap - where K: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = (K, V); - type IntoIter = IntoIter; - - fn into_iter(self) -> IntoIter { - self.into_iter() - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> Iterator for Iter<'a, K, V> { - type Item = (&'a K, &'a V); - - #[inline] fn next(&mut self) -> Option<(&'a K, &'a V)> { self.inner.next() } - #[inline] fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> ExactSizeIterator for Iter<'a, K, V> { - #[inline] fn len(&self) -> usize { self.inner.len() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> Iterator for IterMut<'a, K, V> { - type Item = (&'a K, &'a mut V); - - #[inline] fn next(&mut self) -> Option<(&'a K, &'a mut V)> { self.inner.next() } - #[inline] fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> ExactSizeIterator for IterMut<'a, K, V> { - #[inline] fn len(&self) -> usize { self.inner.len() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Iterator for IntoIter { - type Item = (K, V); - - #[inline] fn next(&mut self) -> Option<(K, V)> { self.inner.next() } - #[inline] fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl ExactSizeIterator for IntoIter { - #[inline] fn len(&self) -> usize { self.inner.len() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> Iterator for Keys<'a, K, V> { - type Item = &'a K; - - #[inline] fn next(&mut self) -> Option<(&'a K)> { self.inner.next() } - #[inline] fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> ExactSizeIterator for Keys<'a, K, V> { - #[inline] fn len(&self) -> usize { self.inner.len() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> Iterator for Values<'a, K, V> { - type 
Item = &'a V; - - #[inline] fn next(&mut self) -> Option<(&'a V)> { self.inner.next() } - #[inline] fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> ExactSizeIterator for Values<'a, K, V> { - #[inline] fn len(&self) -> usize { self.inner.len() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> Iterator for Drain<'a, K, V> { - type Item = (K, V); - - #[inline] fn next(&mut self) -> Option<(K, V)> { self.inner.next() } - #[inline] fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K, V> ExactSizeIterator for Drain<'a, K, V> { - #[inline] fn len(&self) -> usize { self.inner.len() } -} - -#[unstable(feature = "std_misc", - reason = "matches collection reform v2 specification, waiting for dust to settle")] -impl<'a, K, V> Entry<'a, K, V> { - /// Returns a mutable reference to the entry if occupied, or the VacantEntry if vacant. - pub fn get(self) -> Result<&'a mut V, VacantEntry<'a, K, V>> { - match self { - Occupied(entry) => Ok(entry.into_mut()), - Vacant(entry) => Err(entry), - } - } -} - -impl<'a, K, V> OccupiedEntry<'a, K, V> { - /// Gets a reference to the value in the entry. - #[stable(feature = "rust1", since = "1.0.0")] - pub fn get(&self) -> &V { - self.elem.read().1 - } - - /// Gets a mutable reference to the value in the entry. - #[stable(feature = "rust1", since = "1.0.0")] - pub fn get_mut(&mut self) -> &mut V { - self.elem.read_mut().1 - } - - /// Converts the OccupiedEntry into a mutable reference to the value in the entry - /// with a lifetime bound to the map itself - #[stable(feature = "rust1", since = "1.0.0")] - pub fn into_mut(self) -> &'a mut V { - self.elem.into_mut_refs().1 - } - - /// Sets the value of the entry, and returns the entry's old value - #[stable(feature = "rust1", since = "1.0.0")] - pub fn insert(&mut self, mut value: V) -> V { - let old_value = self.get_mut(); - mem::swap(&mut value, old_value); - value - } - - /// Takes the value out of the entry, and returns it - #[stable(feature = "rust1", since = "1.0.0")] - pub fn remove(self) -> V { - pop_internal(self.elem).1 - } -} - -impl<'a, K: 'a, V: 'a> VacantEntry<'a, K, V> { - /// Sets the value of the entry with the VacantEntry's key, - /// and returns a mutable reference to it - #[stable(feature = "rust1", since = "1.0.0")] - pub fn insert(self, value: V) -> &'a mut V { - match self.elem { - NeqElem(bucket, ib) => { - robin_hood(bucket, ib, self.hash, self.key, value) - } - NoElem(bucket) => { - bucket.put(self.hash, self.key, value).into_mut_refs().1 - } - } - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl FromIterator<(K, V)> for HashMap - where K: Eq + Hash, - S: HashState + Default, - H: hash::Hasher -{ - fn from_iter>(iter: T) -> HashMap { - let iter = iter.into_iter(); - let lower = iter.size_hint().0; - let mut map = HashMap::with_capacity_and_hash_state(lower, - Default::default()); - map.extend(iter); - map - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Extend<(K, V)> for HashMap - where K: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - fn extend>(&mut self, iter: T) { - for (k, v) in iter { - self.insert(k, v); - } - } -} - - -/// `RandomState` is the default state for `HashMap` types. 
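`RandomState` as described above still exists in std (now hooked in through the `BuildHasher` trait): each instance carries its own random keys, so the same value generally hashes differently under two different states, which is the Hash-DoS defence the docs mention. A small sketch:

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};

fn hash_one(state: &RandomState, value: &str) -> u64 {
    let mut hasher = state.build_hasher();
    value.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let (s1, s2) = (RandomState::new(), RandomState::new());
    // Almost certainly different, since each state was seeded randomly.
    println!("{:016x} vs {:016x}", hash_one(&s1, "key"), hash_one(&s2, "key"));
}
```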
-/// -/// A particular instance `RandomState` will create the same instances of -/// `Hasher`, but the hashers created by two different `RandomState` -/// instances are unlikely to produce the same result for the same values. -#[derive(Clone)] -#[unstable(feature = "std_misc", - reason = "hashing an hash maps may be altered")] -pub struct RandomState { - k0: u64, - k1: u64, -} - -#[unstable(feature = "std_misc", - reason = "hashing an hash maps may be altered")] -impl RandomState { - /// Construct a new `RandomState` that is initialized with random keys. - #[inline] - #[allow(deprecated)] - pub fn new() -> RandomState { - let mut r = rand::thread_rng(); - RandomState { k0: r.gen(), k1: r.gen() } - } -} - -#[unstable(feature = "std_misc", - reason = "hashing an hash maps may be altered")] -impl HashState for RandomState { - type Hasher = Hasher; - fn hasher(&self) -> Hasher { - Hasher { inner: SipHasher::new_with_keys(self.k0, self.k1) } - } -} - -#[unstable(feature = "std_misc", - reason = "hashing an hash maps may be altered")] -impl Default for RandomState { - #[inline] - fn default() -> RandomState { - RandomState::new() - } -} - -/// A hasher implementation which is generated from `RandomState` instances. -/// -/// This is the default hasher used in a `HashMap` to hash keys. Types do not -/// typically declare an ability to explicitly hash into this particular type, -/// but rather in a `H: hash::Writer` type parameter. -#[unstable(feature = "std_misc", - reason = "hashing an hash maps may be altered")] -pub struct Hasher { inner: SipHasher } - -impl hash::Writer for Hasher { - fn write(&mut self, data: &[u8]) { - hash::Writer::write(&mut self.inner, data) - } -} - -impl hash::Hasher for Hasher { - type Output = u64; - fn reset(&mut self) { hash::Hasher::reset(&mut self.inner) } - fn finish(&self) -> u64 { self.inner.finish() } -} - -#[cfg(test)] -mod test_map { - use prelude::v1::*; - - use super::HashMap; - use super::Entry::{Occupied, Vacant}; - use iter::{range_inclusive, range_step_inclusive, repeat}; - use cell::RefCell; - use rand::{weak_rng, Rng}; - - #[test] - fn test_create_capacity_zero() { - let mut m = HashMap::with_capacity(0); - - assert!(m.insert(1, 1).is_none()); - - assert!(m.contains_key(&1)); - assert!(!m.contains_key(&0)); - } - - #[test] - fn test_insert() { - let mut m = HashMap::new(); - assert_eq!(m.len(), 0); - assert!(m.insert(1, 2).is_none()); - assert_eq!(m.len(), 1); - assert!(m.insert(2, 4).is_none()); - assert_eq!(m.len(), 2); - assert_eq!(*m.get(&1).unwrap(), 2); - assert_eq!(*m.get(&2).unwrap(), 4); - } - - thread_local! 
{ static DROP_VECTOR: RefCell> = RefCell::new(Vec::new()) } - - #[derive(Hash, PartialEq, Eq)] - struct Dropable { - k: usize - } - - impl Dropable { - fn new(k: usize) -> Dropable { - DROP_VECTOR.with(|slot| { - slot.borrow_mut()[k] += 1; - }); - - Dropable { k: k } - } - } - - impl Drop for Dropable { - fn drop(&mut self) { - DROP_VECTOR.with(|slot| { - slot.borrow_mut()[self.k] -= 1; - }); - } - } - - impl Clone for Dropable { - fn clone(&self) -> Dropable { - Dropable::new(self.k) - } - } - - #[test] - fn test_drops() { - DROP_VECTOR.with(|slot| { - *slot.borrow_mut() = repeat(0).take(200).collect(); - }); - - { - let mut m = HashMap::new(); - - DROP_VECTOR.with(|v| { - for i in 0..200 { - assert_eq!(v.borrow()[i], 0); - } - }); - - for i in 0..100 { - let d1 = Dropable::new(i); - let d2 = Dropable::new(i+100); - m.insert(d1, d2); - } - - DROP_VECTOR.with(|v| { - for i in 0..200 { - assert_eq!(v.borrow()[i], 1); - } - }); - - for i in 0..50 { - let k = Dropable::new(i); - let v = m.remove(&k); - - assert!(v.is_some()); - - DROP_VECTOR.with(|v| { - assert_eq!(v.borrow()[i], 1); - assert_eq!(v.borrow()[i+100], 1); - }); - } - - DROP_VECTOR.with(|v| { - for i in 0..50 { - assert_eq!(v.borrow()[i], 0); - assert_eq!(v.borrow()[i+100], 0); - } - - for i in 50..100 { - assert_eq!(v.borrow()[i], 1); - assert_eq!(v.borrow()[i+100], 1); - } - }); - } - - DROP_VECTOR.with(|v| { - for i in 0..200 { - assert_eq!(v.borrow()[i], 0); - } - }); - } - - #[test] - fn test_move_iter_drops() { - DROP_VECTOR.with(|v| { - *v.borrow_mut() = repeat(0).take(200).collect(); - }); - - let hm = { - let mut hm = HashMap::new(); - - DROP_VECTOR.with(|v| { - for i in 0..200 { - assert_eq!(v.borrow()[i], 0); - } - }); - - for i in 0..100 { - let d1 = Dropable::new(i); - let d2 = Dropable::new(i+100); - hm.insert(d1, d2); - } - - DROP_VECTOR.with(|v| { - for i in 0..200 { - assert_eq!(v.borrow()[i], 1); - } - }); - - hm - }; - - // By the way, ensure that cloning doesn't screw up the dropping. - drop(hm.clone()); - - { - let mut half = hm.into_iter().take(50); - - DROP_VECTOR.with(|v| { - for i in 0..200 { - assert_eq!(v.borrow()[i], 1); - } - }); - - for _ in half.by_ref() {} - - DROP_VECTOR.with(|v| { - let nk = (0..100).filter(|&i| { - v.borrow()[i] == 1 - }).count(); - - let nv = (0..100).filter(|&i| { - v.borrow()[i+100] == 1 - }).count(); - - assert_eq!(nk, 50); - assert_eq!(nv, 50); - }); - }; - - DROP_VECTOR.with(|v| { - for i in 0..200 { - assert_eq!(v.borrow()[i], 0); - } - }); - } - - #[test] - fn test_empty_pop() { - let mut m: HashMap = HashMap::new(); - assert_eq!(m.remove(&0), None); - } - - #[test] - fn test_lots_of_insertions() { - let mut m = HashMap::new(); - - // Try this a few times to make sure we never screw up the hashmap's - // internal state. 
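
The `DROP_VECTOR` tests above combine `thread_local!` with `RefCell` to track live values without any synchronization. A stripped-down sketch of that pattern with current stable APIs; the `LIVE_OBJECTS`/`Tracked` names are illustrative, not taken from the patch:

```rust
use std::cell::RefCell;

// One independent counter per thread; RefCell gives interior mutability
// without locks, which is fine because the value never leaves its thread.
thread_local! {
    static LIVE_OBJECTS: RefCell<u32> = RefCell::new(0);
}

struct Tracked;

impl Tracked {
    fn new() -> Tracked {
        LIVE_OBJECTS.with(|n| *n.borrow_mut() += 1);
        Tracked
    }
}

impl Drop for Tracked {
    fn drop(&mut self) {
        LIVE_OBJECTS.with(|n| *n.borrow_mut() -= 1);
    }
}

fn main() {
    let a = Tracked::new();
    let b = Tracked::new();
    LIVE_OBJECTS.with(|n| assert_eq!(*n.borrow(), 2));
    drop(a);
    drop(b);
    LIVE_OBJECTS.with(|n| assert_eq!(*n.borrow(), 0));
}
```
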
- for _ in 0..10 { - assert!(m.is_empty()); - - for i in range_inclusive(1, 1000) { - assert!(m.insert(i, i).is_none()); - - for j in range_inclusive(1, i) { - let r = m.get(&j); - assert_eq!(r, Some(&j)); - } - - for j in range_inclusive(i+1, 1000) { - let r = m.get(&j); - assert_eq!(r, None); - } - } - - for i in range_inclusive(1001, 2000) { - assert!(!m.contains_key(&i)); - } - - // remove forwards - for i in range_inclusive(1, 1000) { - assert!(m.remove(&i).is_some()); - - for j in range_inclusive(1, i) { - assert!(!m.contains_key(&j)); - } - - for j in range_inclusive(i+1, 1000) { - assert!(m.contains_key(&j)); - } - } - - for i in range_inclusive(1, 1000) { - assert!(!m.contains_key(&i)); - } - - for i in range_inclusive(1, 1000) { - assert!(m.insert(i, i).is_none()); - } - - // remove backwards - for i in range_step_inclusive(1000, 1, -1) { - assert!(m.remove(&i).is_some()); - - for j in range_inclusive(i, 1000) { - assert!(!m.contains_key(&j)); - } - - for j in range_inclusive(1, i-1) { - assert!(m.contains_key(&j)); - } - } - } - } - - #[test] - fn test_find_mut() { - let mut m = HashMap::new(); - assert!(m.insert(1, 12).is_none()); - assert!(m.insert(2, 8).is_none()); - assert!(m.insert(5, 14).is_none()); - let new = 100; - match m.get_mut(&5) { - None => panic!(), Some(x) => *x = new - } - assert_eq!(m.get(&5), Some(&new)); - } - - #[test] - fn test_insert_overwrite() { - let mut m = HashMap::new(); - assert!(m.insert(1, 2).is_none()); - assert_eq!(*m.get(&1).unwrap(), 2); - assert!(!m.insert(1, 3).is_none()); - assert_eq!(*m.get(&1).unwrap(), 3); - } - - #[test] - fn test_insert_conflicts() { - let mut m = HashMap::with_capacity(4); - assert!(m.insert(1, 2).is_none()); - assert!(m.insert(5, 3).is_none()); - assert!(m.insert(9, 4).is_none()); - assert_eq!(*m.get(&9).unwrap(), 4); - assert_eq!(*m.get(&5).unwrap(), 3); - assert_eq!(*m.get(&1).unwrap(), 2); - } - - #[test] - fn test_conflict_remove() { - let mut m = HashMap::with_capacity(4); - assert!(m.insert(1, 2).is_none()); - assert_eq!(*m.get(&1).unwrap(), 2); - assert!(m.insert(5, 3).is_none()); - assert_eq!(*m.get(&1).unwrap(), 2); - assert_eq!(*m.get(&5).unwrap(), 3); - assert!(m.insert(9, 4).is_none()); - assert_eq!(*m.get(&1).unwrap(), 2); - assert_eq!(*m.get(&5).unwrap(), 3); - assert_eq!(*m.get(&9).unwrap(), 4); - assert!(m.remove(&1).is_some()); - assert_eq!(*m.get(&9).unwrap(), 4); - assert_eq!(*m.get(&5).unwrap(), 3); - } - - #[test] - fn test_is_empty() { - let mut m = HashMap::with_capacity(4); - assert!(m.insert(1, 2).is_none()); - assert!(!m.is_empty()); - assert!(m.remove(&1).is_some()); - assert!(m.is_empty()); - } - - #[test] - fn test_pop() { - let mut m = HashMap::new(); - m.insert(1, 2); - assert_eq!(m.remove(&1), Some(2)); - assert_eq!(m.remove(&1), None); - } - - #[test] - fn test_iterate() { - let mut m = HashMap::with_capacity(4); - for i in 0..32 { - assert!(m.insert(i, i*2).is_none()); - } - assert_eq!(m.len(), 32); - - let mut observed: u32 = 0; - - for (k, v) in &m { - assert_eq!(*v, *k * 2); - observed |= 1 << *k; - } - assert_eq!(observed, 0xFFFF_FFFF); - } - - #[test] - fn test_keys() { - let vec = vec![(1, 'a'), (2, 'b'), (3, 'c')]; - let map: HashMap<_, _> = vec.into_iter().collect(); - let keys: Vec<_> = map.keys().cloned().collect(); - assert_eq!(keys.len(), 3); - assert!(keys.contains(&1)); - assert!(keys.contains(&2)); - assert!(keys.contains(&3)); - } - - #[test] - fn test_values() { - let vec = vec![(1, 'a'), (2, 'b'), (3, 'c')]; - let map: HashMap<_, _> = vec.into_iter().collect(); - 
let values: Vec<_> = map.values().cloned().collect(); - assert_eq!(values.len(), 3); - assert!(values.contains(&'a')); - assert!(values.contains(&'b')); - assert!(values.contains(&'c')); - } - - #[test] - fn test_find() { - let mut m = HashMap::new(); - assert!(m.get(&1).is_none()); - m.insert(1, 2); - match m.get(&1) { - None => panic!(), - Some(v) => assert_eq!(*v, 2) - } - } - - #[test] - fn test_eq() { - let mut m1 = HashMap::new(); - m1.insert(1, 2); - m1.insert(2, 3); - m1.insert(3, 4); - - let mut m2 = HashMap::new(); - m2.insert(1, 2); - m2.insert(2, 3); - - assert!(m1 != m2); - - m2.insert(3, 4); - - assert_eq!(m1, m2); - } - - #[test] - fn test_show() { - let mut map = HashMap::new(); - let empty: HashMap = HashMap::new(); - - map.insert(1, 2); - map.insert(3, 4); - - let map_str = format!("{:?}", map); - - assert!(map_str == "HashMap {1: 2, 3: 4}" || - map_str == "HashMap {3: 4, 1: 2}"); - assert_eq!(format!("{:?}", empty), "HashMap {}"); - } - - #[test] - fn test_expand() { - let mut m = HashMap::new(); - - assert_eq!(m.len(), 0); - assert!(m.is_empty()); - - let mut i = 0; - let old_cap = m.table.capacity(); - while old_cap == m.table.capacity() { - m.insert(i, i); - i += 1; - } - - assert_eq!(m.len(), i); - assert!(!m.is_empty()); - } - - #[test] - fn test_behavior_resize_policy() { - let mut m = HashMap::new(); - - assert_eq!(m.len(), 0); - assert_eq!(m.table.capacity(), 0); - assert!(m.is_empty()); - - m.insert(0, 0); - m.remove(&0); - assert!(m.is_empty()); - let initial_cap = m.table.capacity(); - m.reserve(initial_cap); - let cap = m.table.capacity(); - - assert_eq!(cap, initial_cap * 2); - - let mut i = 0; - for _ in 0..cap * 3 / 4 { - m.insert(i, i); - i += 1; - } - // three quarters full - - assert_eq!(m.len(), i); - assert_eq!(m.table.capacity(), cap); - - for _ in 0..cap / 4 { - m.insert(i, i); - i += 1; - } - // half full - - let new_cap = m.table.capacity(); - assert_eq!(new_cap, cap * 2); - - for _ in 0..cap / 2 - 1 { - i -= 1; - m.remove(&i); - assert_eq!(m.table.capacity(), new_cap); - } - // A little more than one quarter full. 
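
The resize-policy test here asserts exact capacity doubling against the private `table` field; the public contract is looser. A sketch of what `capacity`/`reserve`/`shrink_to_fit` actually guarantee, using the current stable API and no internals:

```rust
use std::collections::HashMap;

fn main() {
    let mut m: HashMap<u32, u32> = HashMap::new();

    // `reserve` guarantees room for at least this many additional entries;
    // it may allocate more, so only a lower bound can be asserted.
    m.reserve(100);
    assert!(m.capacity() >= 100);

    for i in 0..100 {
        m.insert(i, i);
    }

    // Removing most entries does not shrink the table on its own...
    for i in 10..100 {
        m.remove(&i);
    }
    let before = m.capacity();

    // ...but `shrink_to_fit` may release the excess, never below `len`.
    m.shrink_to_fit();
    assert!(m.capacity() <= before);
    assert!(m.capacity() >= m.len());
    assert_eq!(m.len(), 10);
}
```
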
- m.shrink_to_fit(); - assert_eq!(m.table.capacity(), cap); - // again, a little more than half full - for _ in 0..cap / 2 - 1 { - i -= 1; - m.remove(&i); - } - m.shrink_to_fit(); - - assert_eq!(m.len(), i); - assert!(!m.is_empty()); - assert_eq!(m.table.capacity(), initial_cap); - } - - #[test] - fn test_reserve_shrink_to_fit() { - let mut m = HashMap::new(); - m.insert(0, 0); - m.remove(&0); - assert!(m.capacity() >= m.len()); - for i in 0..128 { - m.insert(i, i); - } - m.reserve(256); - - let usable_cap = m.capacity(); - for i in 128..(128 + 256) { - m.insert(i, i); - assert_eq!(m.capacity(), usable_cap); - } - - for i in 100..(128 + 256) { - assert_eq!(m.remove(&i), Some(i)); - } - m.shrink_to_fit(); - - assert_eq!(m.len(), 100); - assert!(!m.is_empty()); - assert!(m.capacity() >= m.len()); - - for i in 0..100 { - assert_eq!(m.remove(&i), Some(i)); - } - m.shrink_to_fit(); - m.insert(0, 0); - - assert_eq!(m.len(), 1); - assert!(m.capacity() >= m.len()); - assert_eq!(m.remove(&0), Some(0)); - } - - #[test] - fn test_from_iter() { - let xs = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]; - - let map: HashMap<_, _> = xs.iter().cloned().collect(); - - for &(k, v) in &xs { - assert_eq!(map.get(&k), Some(&v)); - } - } - - #[test] - fn test_size_hint() { - let xs = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]; - - let map: HashMap<_, _> = xs.iter().cloned().collect(); - - let mut iter = map.iter(); - - for _ in iter.by_ref().take(3) {} - - assert_eq!(iter.size_hint(), (3, Some(3))); - } - - #[test] - fn test_iter_len() { - let xs = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]; - - let map: HashMap<_, _> = xs.iter().cloned().collect(); - - let mut iter = map.iter(); - - for _ in iter.by_ref().take(3) {} - - assert_eq!(iter.len(), 3); - } - - #[test] - fn test_mut_size_hint() { - let xs = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]; - - let mut map: HashMap<_, _> = xs.iter().cloned().collect(); - - let mut iter = map.iter_mut(); - - for _ in iter.by_ref().take(3) {} - - assert_eq!(iter.size_hint(), (3, Some(3))); - } - - #[test] - fn test_iter_mut_len() { - let xs = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]; - - let mut map: HashMap<_, _> = xs.iter().cloned().collect(); - - let mut iter = map.iter_mut(); - - for _ in iter.by_ref().take(3) {} - - assert_eq!(iter.len(), 3); - } - - #[test] - fn test_index() { - let mut map = HashMap::new(); - - map.insert(1, 2); - map.insert(2, 1); - map.insert(3, 4); - - assert_eq!(map[2], 1); - } - - #[test] - #[should_fail] - fn test_index_nonexistent() { - let mut map = HashMap::new(); - - map.insert(1, 2); - map.insert(2, 1); - map.insert(3, 4); - - map[4]; - } - - #[test] - fn test_entry(){ - let xs = [(1, 10), (2, 20), (3, 30), (4, 40), (5, 50), (6, 60)]; - - let mut map: HashMap<_, _> = xs.iter().cloned().collect(); - - // Existing key (insert) - match map.entry(1) { - Vacant(_) => unreachable!(), - Occupied(mut view) => { - assert_eq!(view.get(), &10); - assert_eq!(view.insert(100), 10); - } - } - assert_eq!(map.get(&1).unwrap(), &100); - assert_eq!(map.len(), 6); - - - // Existing key (update) - match map.entry(2) { - Vacant(_) => unreachable!(), - Occupied(mut view) => { - let v = view.get_mut(); - let new_v = (*v) * 10; - *v = new_v; - } - } - assert_eq!(map.get(&2).unwrap(), &200); - assert_eq!(map.len(), 6); - - // Existing key (take) - match map.entry(3) { - Vacant(_) => unreachable!(), - Occupied(view) => { - assert_eq!(view.remove(), 30); - } - } - assert_eq!(map.get(&3), None); - assert_eq!(map.len(), 5); - - - // 
Inexistent key (insert) - match map.entry(10) { - Occupied(_) => unreachable!(), - Vacant(view) => { - assert_eq!(*view.insert(1000), 1000); - } - } - assert_eq!(map.get(&10).unwrap(), &1000); - assert_eq!(map.len(), 6); - } - - #[test] - fn test_entry_take_doesnt_corrupt() { - // Test for #19292 - fn check(m: &HashMap) { - for k in m.keys() { - assert!(m.contains_key(k), - "{} is in keys() but not in the map?", k); - } - } - - let mut m = HashMap::new(); - let mut rng = weak_rng(); - - // Populate the map with some items. - for _ in 0..50 { - let x = rng.gen_range(-10, 10); - m.insert(x, ()); - } - - for i in 0..1000 { - let x = rng.gen_range(-10, 10); - match m.entry(x) { - Vacant(_) => {}, - Occupied(e) => { - println!("{}: remove {}", i, x); - e.remove(); - }, - } - - check(&m); - } - } -} diff --git a/src/libstd/collections/hash/mod.rs b/src/libstd/collections/hash/mod.rs index 39c1458b72001..47e300af26981 100644 --- a/src/libstd/collections/hash/mod.rs +++ b/src/libstd/collections/hash/mod.rs @@ -12,14 +12,6 @@ mod bench; mod table; -#[cfg(stage0)] -#[path = "map_stage0.rs"] pub mod map; -#[cfg(not(stage0))] -pub mod map; -#[cfg(stage0)] -#[path = "set_stage0.rs"] -pub mod set; -#[cfg(not(stage0))] pub mod set; pub mod state; diff --git a/src/libstd/collections/hash/set_stage0.rs b/src/libstd/collections/hash/set_stage0.rs deleted file mode 100644 index 68c9e02d8ad72..0000000000000 --- a/src/libstd/collections/hash/set_stage0.rs +++ /dev/null @@ -1,1252 +0,0 @@ -// Copyright 2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. -// -// ignore-lexer-test FIXME #15883 - -use borrow::Borrow; -use clone::Clone; -use cmp::{Eq, PartialEq}; -use core::marker::Sized; -use default::Default; -use fmt::Debug; -use fmt; -use hash::{self, Hash}; -use iter::{ - Iterator, IntoIterator, ExactSizeIterator, IteratorExt, FromIterator, Map, Chain, Extend, -}; -use ops::{BitOr, BitAnd, BitXor, Sub}; -use option::Option::{Some, None, self}; - -use super::map::{self, HashMap, Keys, INITIAL_CAPACITY, RandomState, Hasher}; -use super::state::HashState; - -// Future Optimization (FIXME!) -// ============================= -// -// Iteration over zero sized values is a noop. There is no need -// for `bucket.val` in the case of HashSet. I suppose we would need HKT -// to get rid of it properly. - -/// An implementation of a hash set using the underlying representation of a -/// HashMap where the value is (). As with the `HashMap` type, a `HashSet` -/// requires that the elements implement the `Eq` and `Hash` traits. -/// -/// # Example -/// -/// ``` -/// use std::collections::HashSet; -/// // Type inference lets us omit an explicit type signature (which -/// // would be `HashSet<&str>` in this example). -/// let mut books = HashSet::new(); -/// -/// // Add some books. -/// books.insert("A Dance With Dragons"); -/// books.insert("To Kill a Mockingbird"); -/// books.insert("The Odyssey"); -/// books.insert("The Great Gatsby"); -/// -/// // Check for a specific one. -/// if !books.contains(&("The Winds of Winter")) { -/// println!("We have {} books, but The Winds of Winter ain't one.", -/// books.len()); -/// } -/// -/// // Remove a book. -/// books.remove(&"The Odyssey"); -/// -/// // Iterate over everything. 
-/// for book in books.iter() { -/// println!("{}", *book); -/// } -/// ``` -/// -/// The easiest way to use `HashSet` with a custom type is to derive -/// `Eq` and `Hash`. We must also derive `PartialEq`, this will in the -/// future be implied by `Eq`. -/// -/// ``` -/// use std::collections::HashSet; -/// #[derive(Hash, Eq, PartialEq, Debug)] -/// struct Viking<'a> { -/// name: &'a str, -/// power: usize, -/// } -/// -/// let mut vikings = HashSet::new(); -/// -/// vikings.insert(Viking { name: "Einar", power: 9 }); -/// vikings.insert(Viking { name: "Einar", power: 9 }); -/// vikings.insert(Viking { name: "Olaf", power: 4 }); -/// vikings.insert(Viking { name: "Harald", power: 8 }); -/// -/// // Use derived implementation to print the vikings. -/// for x in vikings.iter() { -/// println!("{:?}", x); -/// } -/// ``` -#[derive(Clone)] -#[stable(feature = "rust1", since = "1.0.0")] -pub struct HashSet { - map: HashMap -} - -impl + Eq> HashSet { - /// Create an empty HashSet. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let mut set: HashSet = HashSet::new(); - /// ``` - #[inline] - #[stable(feature = "rust1", since = "1.0.0")] - pub fn new() -> HashSet { - HashSet::with_capacity(INITIAL_CAPACITY) - } - - /// Create an empty HashSet with space for at least `n` elements in - /// the hash table. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let mut set: HashSet = HashSet::with_capacity(10); - /// ``` - #[inline] - #[stable(feature = "rust1", since = "1.0.0")] - pub fn with_capacity(capacity: usize) -> HashSet { - HashSet { map: HashMap::with_capacity(capacity) } - } -} - -impl HashSet - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - /// Creates a new empty hash set which will use the given hasher to hash - /// keys. - /// - /// The hash set is also created with the default initial capacity. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// use std::collections::hash_map::RandomState; - /// - /// let s = RandomState::new(); - /// let mut set = HashSet::with_hash_state(s); - /// set.insert(2); - /// ``` - #[inline] - #[unstable(feature = "std_misc", reason = "hasher stuff is unclear")] - pub fn with_hash_state(hash_state: S) -> HashSet { - HashSet::with_capacity_and_hash_state(INITIAL_CAPACITY, hash_state) - } - - /// Create an empty HashSet with space for at least `capacity` - /// elements in the hash table, using `hasher` to hash the keys. - /// - /// Warning: `hasher` is normally randomly generated, and - /// is designed to allow `HashSet`s to be resistant to attacks that - /// cause many collisions and very poor performance. Setting it - /// manually using this function can expose a DoS attack vector. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// use std::collections::hash_map::RandomState; - /// - /// let s = RandomState::new(); - /// let mut set = HashSet::with_capacity_and_hash_state(10, s); - /// set.insert(1); - /// ``` - #[inline] - #[unstable(feature = "std_misc", reason = "hasher stuff is unclear")] - pub fn with_capacity_and_hash_state(capacity: usize, hash_state: S) - -> HashSet { - HashSet { - map: HashMap::with_capacity_and_hash_state(capacity, hash_state), - } - } - - /// Returns the number of elements the set can hold without reallocating. 
- /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let set: HashSet = HashSet::with_capacity(100); - /// assert!(set.capacity() >= 100); - /// ``` - #[inline] - #[stable(feature = "rust1", since = "1.0.0")] - pub fn capacity(&self) -> usize { - self.map.capacity() - } - - /// Reserves capacity for at least `additional` more elements to be inserted - /// in the `HashSet`. The collection may reserve more space to avoid - /// frequent reallocations. - /// - /// # Panics - /// - /// Panics if the new allocation size overflows `usize`. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let mut set: HashSet = HashSet::new(); - /// set.reserve(10); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn reserve(&mut self, additional: usize) { - self.map.reserve(additional) - } - - /// Shrinks the capacity of the set as much as possible. It will drop - /// down as much as possible while maintaining the internal rules - /// and possibly leaving some space in accordance with the resize policy. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let mut set: HashSet = HashSet::with_capacity(100); - /// set.insert(1); - /// set.insert(2); - /// assert!(set.capacity() >= 100); - /// set.shrink_to_fit(); - /// assert!(set.capacity() >= 2); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn shrink_to_fit(&mut self) { - self.map.shrink_to_fit() - } - - /// An iterator visiting all elements in arbitrary order. - /// Iterator element type is &'a T. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let mut set = HashSet::new(); - /// set.insert("a"); - /// set.insert("b"); - /// - /// // Will print in an arbitrary order. - /// for x in set.iter() { - /// println!("{}", x); - /// } - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn iter(&self) -> Iter { - Iter { iter: self.map.keys() } - } - - /// Creates a consuming iterator, that is, one that moves each value out - /// of the set in arbitrary order. The set cannot be used after calling - /// this. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let mut set = HashSet::new(); - /// set.insert("a".to_string()); - /// set.insert("b".to_string()); - /// - /// // Not possible to collect to a Vec with a regular `.iter()`. - /// let v: Vec = set.into_iter().collect(); - /// - /// // Will print in an arbitrary order. - /// for x in v.iter() { - /// println!("{}", x); - /// } - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn into_iter(self) -> IntoIter { - fn first((a, _): (A, B)) -> A { a } - let first: fn((T, ())) -> T = first; - - IntoIter { iter: self.map.into_iter().map(first) } - } - - /// Visit the values representing the difference. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let a: HashSet = [1, 2, 3].iter().map(|&x| x).collect(); - /// let b: HashSet = [4, 2, 3, 4].iter().map(|&x| x).collect(); - /// - /// // Can be seen as `a - b`. 
- /// for x in a.difference(&b) { - /// println!("{}", x); // Print 1 - /// } - /// - /// let diff: HashSet = a.difference(&b).map(|&x| x).collect(); - /// assert_eq!(diff, [1].iter().map(|&x| x).collect()); - /// - /// // Note that difference is not symmetric, - /// // and `b - a` means something else: - /// let diff: HashSet = b.difference(&a).map(|&x| x).collect(); - /// assert_eq!(diff, [4].iter().map(|&x| x).collect()); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn difference<'a>(&'a self, other: &'a HashSet) -> Difference<'a, T, S> { - Difference { - iter: self.iter(), - other: other, - } - } - - /// Visit the values representing the symmetric difference. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let a: HashSet = [1, 2, 3].iter().map(|&x| x).collect(); - /// let b: HashSet = [4, 2, 3, 4].iter().map(|&x| x).collect(); - /// - /// // Print 1, 4 in arbitrary order. - /// for x in a.symmetric_difference(&b) { - /// println!("{}", x); - /// } - /// - /// let diff1: HashSet = a.symmetric_difference(&b).map(|&x| x).collect(); - /// let diff2: HashSet = b.symmetric_difference(&a).map(|&x| x).collect(); - /// - /// assert_eq!(diff1, diff2); - /// assert_eq!(diff1, [1, 4].iter().map(|&x| x).collect()); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn symmetric_difference<'a>(&'a self, other: &'a HashSet) - -> SymmetricDifference<'a, T, S> { - SymmetricDifference { iter: self.difference(other).chain(other.difference(self)) } - } - - /// Visit the values representing the intersection. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let a: HashSet = [1, 2, 3].iter().map(|&x| x).collect(); - /// let b: HashSet = [4, 2, 3, 4].iter().map(|&x| x).collect(); - /// - /// // Print 2, 3 in arbitrary order. - /// for x in a.intersection(&b) { - /// println!("{}", x); - /// } - /// - /// let diff: HashSet = a.intersection(&b).map(|&x| x).collect(); - /// assert_eq!(diff, [2, 3].iter().map(|&x| x).collect()); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn intersection<'a>(&'a self, other: &'a HashSet) -> Intersection<'a, T, S> { - Intersection { - iter: self.iter(), - other: other, - } - } - - /// Visit the values representing the union. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// let a: HashSet = [1, 2, 3].iter().map(|&x| x).collect(); - /// let b: HashSet = [4, 2, 3, 4].iter().map(|&x| x).collect(); - /// - /// // Print 1, 2, 3, 4 in arbitrary order. 
- /// for x in a.union(&b) { - /// println!("{}", x); - /// } - /// - /// let diff: HashSet = a.union(&b).map(|&x| x).collect(); - /// assert_eq!(diff, [1, 2, 3, 4].iter().map(|&x| x).collect()); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn union<'a>(&'a self, other: &'a HashSet) -> Union<'a, T, S> { - Union { iter: self.iter().chain(other.difference(self)) } - } - - /// Return the number of elements in the set - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let mut v = HashSet::new(); - /// assert_eq!(v.len(), 0); - /// v.insert(1); - /// assert_eq!(v.len(), 1); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn len(&self) -> usize { self.map.len() } - - /// Returns true if the set contains no elements - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let mut v = HashSet::new(); - /// assert!(v.is_empty()); - /// v.insert(1); - /// assert!(!v.is_empty()); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn is_empty(&self) -> bool { self.map.len() == 0 } - - /// Clears the set, returning all elements in an iterator. - #[inline] - #[unstable(feature = "std_misc", - reason = "matches collection reform specification, waiting for dust to settle")] - pub fn drain(&mut self) -> Drain { - fn first((a, _): (A, B)) -> A { a } - let first: fn((T, ())) -> T = first; // coerce to fn pointer - - Drain { iter: self.map.drain().map(first) } - } - - /// Clears the set, removing all values. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let mut v = HashSet::new(); - /// v.insert(1); - /// v.clear(); - /// assert!(v.is_empty()); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn clear(&mut self) { self.map.clear() } - - /// Returns `true` if the set contains a value. - /// - /// The value may be any borrowed form of the set's value type, but - /// `Hash` and `Eq` on the borrowed form *must* match those for - /// the value type. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let set: HashSet<_> = [1, 2, 3].iter().cloned().collect(); - /// assert_eq!(set.contains(&1), true); - /// assert_eq!(set.contains(&4), false); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn contains(&self, value: &Q) -> bool - where T: Borrow, Q: Hash + Eq - { - self.map.contains_key(value) - } - - /// Returns `true` if the set has no elements in common with `other`. - /// This is equivalent to checking for an empty intersection. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let a: HashSet<_> = [1, 2, 3].iter().cloned().collect(); - /// let mut b = HashSet::new(); - /// - /// assert_eq!(a.is_disjoint(&b), true); - /// b.insert(4); - /// assert_eq!(a.is_disjoint(&b), true); - /// b.insert(1); - /// assert_eq!(a.is_disjoint(&b), false); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn is_disjoint(&self, other: &HashSet) -> bool { - self.iter().all(|v| !other.contains(v)) - } - - /// Returns `true` if the set is a subset of another. 
- /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let sup: HashSet<_> = [1, 2, 3].iter().cloned().collect(); - /// let mut set = HashSet::new(); - /// - /// assert_eq!(set.is_subset(&sup), true); - /// set.insert(2); - /// assert_eq!(set.is_subset(&sup), true); - /// set.insert(4); - /// assert_eq!(set.is_subset(&sup), false); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn is_subset(&self, other: &HashSet) -> bool { - self.iter().all(|v| other.contains(v)) - } - - /// Returns `true` if the set is a superset of another. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let sub: HashSet<_> = [1, 2].iter().cloned().collect(); - /// let mut set = HashSet::new(); - /// - /// assert_eq!(set.is_superset(&sub), false); - /// - /// set.insert(0); - /// set.insert(1); - /// assert_eq!(set.is_superset(&sub), false); - /// - /// set.insert(2); - /// assert_eq!(set.is_superset(&sub), true); - /// ``` - #[inline] - #[stable(feature = "rust1", since = "1.0.0")] - pub fn is_superset(&self, other: &HashSet) -> bool { - other.is_subset(self) - } - - /// Adds a value to the set. Returns `true` if the value was not already - /// present in the set. - /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let mut set = HashSet::new(); - /// - /// assert_eq!(set.insert(2), true); - /// assert_eq!(set.insert(2), false); - /// assert_eq!(set.len(), 1); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn insert(&mut self, value: T) -> bool { self.map.insert(value, ()).is_none() } - - /// Removes a value from the set. Returns `true` if the value was - /// present in the set. - /// - /// The value may be any borrowed form of the set's value type, but - /// `Hash` and `Eq` on the borrowed form *must* match those for - /// the value type. 
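
The `Borrow` bound described in the `contains`/`remove` docs above is what lets an owned-key collection be queried with a cheaper borrowed form. A small sketch with the current stable API:

```rust
use std::collections::HashSet;

fn main() {
    let mut names: HashSet<String> = HashSet::new();
    names.insert("ferris".to_string());

    // Because String: Borrow<str> (and str hashes/compares the same way),
    // lookups and removals can take &str directly, with no temporary String.
    assert!(names.contains("ferris"));
    assert!(names.remove("ferris"));
    assert!(!names.contains("ferris"));
}
```
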
- /// - /// # Example - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let mut set = HashSet::new(); - /// - /// set.insert(2); - /// assert_eq!(set.remove(&2), true); - /// assert_eq!(set.remove(&2), false); - /// ``` - #[stable(feature = "rust1", since = "1.0.0")] - pub fn remove(&mut self, value: &Q) -> bool - where T: Borrow, Q: Hash + Eq - { - self.map.remove(value).is_some() - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl PartialEq for HashSet - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - fn eq(&self, other: &HashSet) -> bool { - if self.len() != other.len() { return false; } - - self.iter().all(|key| other.contains(key)) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Eq for HashSet - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{} - -#[stable(feature = "rust1", since = "1.0.0")] -impl fmt::Debug for HashSet - where T: Eq + Hash + fmt::Debug, - S: HashState, - H: hash::Hasher -{ - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - try!(write!(f, "HashSet {{")); - - for (i, x) in self.iter().enumerate() { - if i != 0 { try!(write!(f, ", ")); } - try!(write!(f, "{:?}", *x)); - } - - write!(f, "}}") - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl FromIterator for HashSet - where T: Eq + Hash, - S: HashState + Default, - H: hash::Hasher -{ - fn from_iter>(iter: I) -> HashSet { - let iter = iter.into_iter(); - let lower = iter.size_hint().0; - let mut set = HashSet::with_capacity_and_hash_state(lower, Default::default()); - set.extend(iter); - set - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Extend for HashSet - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - fn extend>(&mut self, iter: I) { - for k in iter { - self.insert(k); - } - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Default for HashSet - where T: Eq + Hash, - S: HashState + Default, - H: hash::Hasher -{ - #[stable(feature = "rust1", since = "1.0.0")] - fn default() -> HashSet { - HashSet::with_hash_state(Default::default()) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, 'b, T, S, H> BitOr<&'b HashSet> for &'a HashSet - where T: Eq + Hash + Clone, - S: HashState + Default, - H: hash::Hasher -{ - type Output = HashSet; - - /// Returns the union of `self` and `rhs` as a new `HashSet`. - /// - /// # Examples - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let a: HashSet<_> = vec![1, 2, 3].into_iter().collect(); - /// let b: HashSet<_> = vec![3, 4, 5].into_iter().collect(); - /// - /// let set = &a | &b; - /// - /// let mut i = 0; - /// let expected = [1, 2, 3, 4, 5]; - /// for x in set.iter() { - /// assert!(expected.contains(x)); - /// i += 1; - /// } - /// assert_eq!(i, expected.len()); - /// ``` - fn bitor(self, rhs: &HashSet) -> HashSet { - self.union(rhs).cloned().collect() - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, 'b, T, S, H> BitAnd<&'b HashSet> for &'a HashSet - where T: Eq + Hash + Clone, - S: HashState + Default, - H: hash::Hasher -{ - type Output = HashSet; - - /// Returns the intersection of `self` and `rhs` as a new `HashSet`. 
- /// - /// # Examples - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let a: HashSet<_> = vec![1, 2, 3].into_iter().collect(); - /// let b: HashSet<_> = vec![2, 3, 4].into_iter().collect(); - /// - /// let set = &a & &b; - /// - /// let mut i = 0; - /// let expected = [2, 3]; - /// for x in set.iter() { - /// assert!(expected.contains(x)); - /// i += 1; - /// } - /// assert_eq!(i, expected.len()); - /// ``` - fn bitand(self, rhs: &HashSet) -> HashSet { - self.intersection(rhs).cloned().collect() - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, 'b, T, S, H> BitXor<&'b HashSet> for &'a HashSet - where T: Eq + Hash + Clone, - S: HashState + Default, - H: hash::Hasher -{ - type Output = HashSet; - - /// Returns the symmetric difference of `self` and `rhs` as a new `HashSet`. - /// - /// # Examples - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let a: HashSet<_> = vec![1, 2, 3].into_iter().collect(); - /// let b: HashSet<_> = vec![3, 4, 5].into_iter().collect(); - /// - /// let set = &a ^ &b; - /// - /// let mut i = 0; - /// let expected = [1, 2, 4, 5]; - /// for x in set.iter() { - /// assert!(expected.contains(x)); - /// i += 1; - /// } - /// assert_eq!(i, expected.len()); - /// ``` - fn bitxor(self, rhs: &HashSet) -> HashSet { - self.symmetric_difference(rhs).cloned().collect() - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, 'b, T, S, H> Sub<&'b HashSet> for &'a HashSet - where T: Eq + Hash + Clone, - S: HashState + Default, - H: hash::Hasher -{ - type Output = HashSet; - - /// Returns the difference of `self` and `rhs` as a new `HashSet`. - /// - /// # Examples - /// - /// ``` - /// use std::collections::HashSet; - /// - /// let a: HashSet<_> = vec![1, 2, 3].into_iter().collect(); - /// let b: HashSet<_> = vec![3, 4, 5].into_iter().collect(); - /// - /// let set = &a - &b; - /// - /// let mut i = 0; - /// let expected = [1, 2]; - /// for x in set.iter() { - /// assert!(expected.contains(x)); - /// i += 1; - /// } - /// assert_eq!(i, expected.len()); - /// ``` - fn sub(self, rhs: &HashSet) -> HashSet { - self.difference(rhs).cloned().collect() - } -} - -/// HashSet iterator -#[stable(feature = "rust1", since = "1.0.0")] -pub struct Iter<'a, K: 'a> { - iter: Keys<'a, K, ()> -} - -/// HashSet move iterator -#[stable(feature = "rust1", since = "1.0.0")] -pub struct IntoIter { - iter: Map, fn((K, ())) -> K> -} - -/// HashSet drain iterator -#[stable(feature = "rust1", since = "1.0.0")] -pub struct Drain<'a, K: 'a> { - iter: Map, fn((K, ())) -> K>, -} - -/// Intersection iterator -#[stable(feature = "rust1", since = "1.0.0")] -pub struct Intersection<'a, T: 'a, S: 'a> { - // iterator of the first set - iter: Iter<'a, T>, - // the second set - other: &'a HashSet, -} - -/// Difference iterator -#[stable(feature = "rust1", since = "1.0.0")] -pub struct Difference<'a, T: 'a, S: 'a> { - // iterator of the first set - iter: Iter<'a, T>, - // the second set - other: &'a HashSet, -} - -/// Symmetric difference iterator. -#[stable(feature = "rust1", since = "1.0.0")] -pub struct SymmetricDifference<'a, T: 'a, S: 'a> { - iter: Chain, Difference<'a, T, S>> -} - -/// Set union iterator. 
-#[stable(feature = "rust1", since = "1.0.0")] -pub struct Union<'a, T: 'a, S: 'a> { - iter: Chain, Difference<'a, T, S>> -} - -impl<'a, T, S, H> IntoIterator for &'a HashSet - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = &'a T; - type IntoIter = Iter<'a, T>; - - fn into_iter(self) -> Iter<'a, T> { - self.iter() - } -} - -impl IntoIterator for HashSet - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = T; - type IntoIter = IntoIter; - - fn into_iter(self) -> IntoIter { - self.into_iter() - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K> Iterator for Iter<'a, K> { - type Item = &'a K; - - fn next(&mut self) -> Option<&'a K> { self.iter.next() } - fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K> ExactSizeIterator for Iter<'a, K> { - fn len(&self) -> usize { self.iter.len() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl Iterator for IntoIter { - type Item = K; - - fn next(&mut self) -> Option { self.iter.next() } - fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl ExactSizeIterator for IntoIter { - fn len(&self) -> usize { self.iter.len() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K> Iterator for Drain<'a, K> { - type Item = K; - - fn next(&mut self) -> Option { self.iter.next() } - fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } -} -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, K> ExactSizeIterator for Drain<'a, K> { - fn len(&self) -> usize { self.iter.len() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T, S, H> Iterator for Intersection<'a, T, S> - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = &'a T; - - fn next(&mut self) -> Option<&'a T> { - loop { - match self.iter.next() { - None => return None, - Some(elt) => if self.other.contains(elt) { - return Some(elt) - }, - } - } - } - - fn size_hint(&self) -> (usize, Option) { - let (_, upper) = self.iter.size_hint(); - (0, upper) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T, S, H> Iterator for Difference<'a, T, S> - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = &'a T; - - fn next(&mut self) -> Option<&'a T> { - loop { - match self.iter.next() { - None => return None, - Some(elt) => if !self.other.contains(elt) { - return Some(elt) - }, - } - } - } - - fn size_hint(&self) -> (usize, Option) { - let (_, upper) = self.iter.size_hint(); - (0, upper) - } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T, S, H> Iterator for SymmetricDifference<'a, T, S> - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = &'a T; - - fn next(&mut self) -> Option<&'a T> { self.iter.next() } - fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } -} - -#[stable(feature = "rust1", since = "1.0.0")] -impl<'a, T, S, H> Iterator for Union<'a, T, S> - where T: Eq + Hash, - S: HashState, - H: hash::Hasher -{ - type Item = &'a T; - - fn next(&mut self) -> Option<&'a T> { self.iter.next() } - fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } -} - -#[cfg(test)] -mod test_set { - use prelude::v1::*; - - use super::HashSet; - - #[test] - fn test_disjoint() { - let mut xs = HashSet::new(); - let mut ys = HashSet::new(); - assert!(xs.is_disjoint(&ys)); - assert!(ys.is_disjoint(&xs)); - assert!(xs.insert(5)); - assert!(ys.insert(11)); - 
assert!(xs.is_disjoint(&ys)); - assert!(ys.is_disjoint(&xs)); - assert!(xs.insert(7)); - assert!(xs.insert(19)); - assert!(xs.insert(4)); - assert!(ys.insert(2)); - assert!(ys.insert(-11)); - assert!(xs.is_disjoint(&ys)); - assert!(ys.is_disjoint(&xs)); - assert!(ys.insert(7)); - assert!(!xs.is_disjoint(&ys)); - assert!(!ys.is_disjoint(&xs)); - } - - #[test] - fn test_subset_and_superset() { - let mut a = HashSet::new(); - assert!(a.insert(0)); - assert!(a.insert(5)); - assert!(a.insert(11)); - assert!(a.insert(7)); - - let mut b = HashSet::new(); - assert!(b.insert(0)); - assert!(b.insert(7)); - assert!(b.insert(19)); - assert!(b.insert(250)); - assert!(b.insert(11)); - assert!(b.insert(200)); - - assert!(!a.is_subset(&b)); - assert!(!a.is_superset(&b)); - assert!(!b.is_subset(&a)); - assert!(!b.is_superset(&a)); - - assert!(b.insert(5)); - - assert!(a.is_subset(&b)); - assert!(!a.is_superset(&b)); - assert!(!b.is_subset(&a)); - assert!(b.is_superset(&a)); - } - - #[test] - fn test_iterate() { - let mut a = HashSet::new(); - for i in 0..32 { - assert!(a.insert(i)); - } - let mut observed: u32 = 0; - for k in &a { - observed |= 1 << *k; - } - assert_eq!(observed, 0xFFFF_FFFF); - } - - #[test] - fn test_intersection() { - let mut a = HashSet::new(); - let mut b = HashSet::new(); - - assert!(a.insert(11)); - assert!(a.insert(1)); - assert!(a.insert(3)); - assert!(a.insert(77)); - assert!(a.insert(103)); - assert!(a.insert(5)); - assert!(a.insert(-5)); - - assert!(b.insert(2)); - assert!(b.insert(11)); - assert!(b.insert(77)); - assert!(b.insert(-9)); - assert!(b.insert(-42)); - assert!(b.insert(5)); - assert!(b.insert(3)); - - let mut i = 0; - let expected = [3, 5, 11, 77]; - for x in a.intersection(&b) { - assert!(expected.contains(x)); - i += 1 - } - assert_eq!(i, expected.len()); - } - - #[test] - fn test_difference() { - let mut a = HashSet::new(); - let mut b = HashSet::new(); - - assert!(a.insert(1)); - assert!(a.insert(3)); - assert!(a.insert(5)); - assert!(a.insert(9)); - assert!(a.insert(11)); - - assert!(b.insert(3)); - assert!(b.insert(9)); - - let mut i = 0; - let expected = [1, 5, 11]; - for x in a.difference(&b) { - assert!(expected.contains(x)); - i += 1 - } - assert_eq!(i, expected.len()); - } - - #[test] - fn test_symmetric_difference() { - let mut a = HashSet::new(); - let mut b = HashSet::new(); - - assert!(a.insert(1)); - assert!(a.insert(3)); - assert!(a.insert(5)); - assert!(a.insert(9)); - assert!(a.insert(11)); - - assert!(b.insert(-2)); - assert!(b.insert(3)); - assert!(b.insert(9)); - assert!(b.insert(14)); - assert!(b.insert(22)); - - let mut i = 0; - let expected = [-2, 1, 5, 11, 14, 22]; - for x in a.symmetric_difference(&b) { - assert!(expected.contains(x)); - i += 1 - } - assert_eq!(i, expected.len()); - } - - #[test] - fn test_union() { - let mut a = HashSet::new(); - let mut b = HashSet::new(); - - assert!(a.insert(1)); - assert!(a.insert(3)); - assert!(a.insert(5)); - assert!(a.insert(9)); - assert!(a.insert(11)); - assert!(a.insert(16)); - assert!(a.insert(19)); - assert!(a.insert(24)); - - assert!(b.insert(-2)); - assert!(b.insert(1)); - assert!(b.insert(5)); - assert!(b.insert(9)); - assert!(b.insert(13)); - assert!(b.insert(19)); - - let mut i = 0; - let expected = [-2, 1, 3, 5, 9, 11, 13, 16, 19, 24]; - for x in a.union(&b) { - assert!(expected.contains(x)); - i += 1 - } - assert_eq!(i, expected.len()); - } - - #[test] - fn test_from_iter() { - let xs = [1, 2, 3, 4, 5, 6, 7, 8, 9]; - - let set: HashSet<_> = xs.iter().cloned().collect(); - - for x in &xs 
{ - assert!(set.contains(x)); - } - } - - #[test] - fn test_move_iter() { - let hs = { - let mut hs = HashSet::new(); - - hs.insert('a'); - hs.insert('b'); - - hs - }; - - let v = hs.into_iter().collect::>(); - assert!(['a', 'b'] == v || ['b', 'a'] == v); - } - - #[test] - fn test_eq() { - // These constants once happened to expose a bug in insert(). - // I'm keeping them around to prevent a regression. - let mut s1 = HashSet::new(); - - s1.insert(1); - s1.insert(2); - s1.insert(3); - - let mut s2 = HashSet::new(); - - s2.insert(1); - s2.insert(2); - - assert!(s1 != s2); - - s2.insert(3); - - assert_eq!(s1, s2); - } - - #[test] - fn test_show() { - let mut set = HashSet::new(); - let empty = HashSet::::new(); - - set.insert(1); - set.insert(2); - - let set_str = format!("{:?}", set); - - assert!(set_str == "HashSet {1, 2}" || set_str == "HashSet {2, 1}"); - assert_eq!(format!("{:?}", empty), "HashSet {}"); - } - - #[test] - fn test_trivial_drain() { - let mut s = HashSet::::new(); - for _ in s.drain() {} - assert!(s.is_empty()); - drop(s); - - let mut s = HashSet::::new(); - drop(s.drain()); - assert!(s.is_empty()); - } - - #[test] - fn test_drain() { - let mut s: HashSet<_> = (1..100).collect(); - - // try this a bunch of times to make sure we don't screw up internal state. - for _ in 0..20 { - assert_eq!(s.len(), 99); - - { - let mut last_i = 0; - let mut d = s.drain(); - for (i, x) in d.by_ref().take(50).enumerate() { - last_i = i; - assert!(x != 0); - } - assert_eq!(last_i, 49); - } - - for _ in &s { panic!("s should be empty!"); } - - // reset to try again. - s.extend(1..100); - } - } -} diff --git a/src/libstd/collections/hash/table.rs b/src/libstd/collections/hash/table.rs index f301f6db92f96..7513cb8a61c7c 100644 --- a/src/libstd/collections/hash/table.rs +++ b/src/libstd/collections/hash/table.rs @@ -143,25 +143,6 @@ impl SafeHash { /// We need to remove hashes of 0. That's reserved for empty buckets. /// This function wraps up `hash_keyed` to be the only way outside this /// module to generate a SafeHash. -#[cfg(stage0)] -pub fn make_hash(hash_state: &S, t: &T) -> SafeHash - where T: Hash, - S: HashState, - H: Hasher -{ - let mut state = hash_state.hasher(); - t.hash(&mut state); - // We need to avoid 0u64 in order to prevent collisions with - // EMPTY_HASH. We can maintain our precious uniform distribution - // of initial indexes by unconditionally setting the MSB, - // effectively reducing 64-bits hashes to 63 bits. - SafeHash { hash: 0x8000_0000_0000_0000 | state.finish() } -} - -/// We need to remove hashes of 0. That's reserved for empty buckets. -/// This function wraps up `hash_keyed` to be the only way outside this -/// module to generate a SafeHash. 
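
The trick both the deleted stage0 `make_hash` and the retained version rely on is worth spelling out: OR-ing in the top bit guarantees the stored hash is never the reserved `EMPTY_HASH` value of 0, at the cost of one bit of entropy. A standalone sketch using today's public `DefaultHasher` (the internal `SafeHash`/`HashState` types are not exposed):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hashes of 0 are reserved to mark empty buckets, so force the MSB on:
// the result is always non-zero and still uniformly distributed over
// the remaining 63 bits.
fn make_safe_hash<T: Hash>(t: &T) -> u64 {
    let mut state = DefaultHasher::new();
    t.hash(&mut state);
    0x8000_0000_0000_0000 | state.finish()
}

fn main() {
    let h = make_safe_hash(&"some key");
    assert!(h != 0);
    assert!(h & 0x8000_0000_0000_0000 != 0);
}
```
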
-#[cfg(not(stage0))] pub fn make_hash(hash_state: &S, t: &T) -> SafeHash where T: Hash, S: HashState { diff --git a/src/libstd/ffi/os_str.rs b/src/libstd/ffi/os_str.rs index 84149a2eb8e43..fe0df1728efc0 100644 --- a/src/libstd/ffi/os_str.rs +++ b/src/libstd/ffi/os_str.rs @@ -41,7 +41,6 @@ use string::{String, CowString}; use ops; use cmp; use hash::{Hash, Hasher}; -#[cfg(stage0)] use hash::Writer; use old_path::{Path, GenericPath}; use sys::os_str::{Buf, Slice}; @@ -163,14 +162,6 @@ impl Ord for OsString { } } -#[cfg(stage0)] -impl<'a, S: Hasher + Writer> Hash for OsString { - #[inline] - fn hash(&self, state: &mut S) { - (&**self).hash(state) - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl Hash for OsString { #[inline] @@ -263,14 +254,6 @@ impl Ord for OsStr { fn cmp(&self, other: &OsStr) -> cmp::Ordering { self.bytes().cmp(other.bytes()) } } -#[cfg(stage0)] -impl<'a, S: Hasher + Writer> Hash for OsStr { - #[inline] - fn hash(&self, state: &mut S) { - self.bytes().hash(state) - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl Hash for OsStr { #[inline] diff --git a/src/libstd/fs.rs b/src/libstd/fs.rs index 69791084e2f9f..98c1b50a9bf14 100644 --- a/src/libstd/fs.rs +++ b/src/libstd/fs.rs @@ -744,6 +744,8 @@ pub fn set_permissions(path: &P, perm: Permissions) #[cfg(test)] mod tests { + #![allow(deprecated)] //rand + use prelude::v1::*; use io::prelude::*; @@ -1035,7 +1037,7 @@ mod tests { let msg = msg_str.as_bytes(); check!(w.write(msg)); } - let mut files = check!(fs::read_dir(dir)); + let files = check!(fs::read_dir(dir)); let mut mem = [0u8; 4]; for f in files { let f = f.unwrap().path(); @@ -1065,7 +1067,7 @@ mod tests { check!(fs::create_dir_all(dir2)); check!(File::create(&dir2.join("14"))); - let mut files = check!(fs::walk_dir(dir)); + let files = check!(fs::walk_dir(dir)); let mut cur = [0u8; 2]; for f in files { let f = f.unwrap().path(); diff --git a/src/libstd/io/buffered.rs b/src/libstd/io/buffered.rs index e9a8dbb4098af..9ef319782369d 100644 --- a/src/libstd/io/buffered.rs +++ b/src/libstd/io/buffered.rs @@ -497,7 +497,6 @@ mod tests { assert_eq!(*writer.get_ref(), [0, 1, 2, 3, 4, 5, 6, 7, 8]); writer.write(&[9, 10, 11]).unwrap(); - let a: &[_] = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]; assert_eq!(*writer.get_ref(), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]); writer.flush().unwrap(); @@ -593,7 +592,7 @@ mod tests { #[test] fn test_lines() { let in_buf = b"a\nb\nc"; - let mut reader = BufReader::with_capacity(2, in_buf); + let reader = BufReader::with_capacity(2, in_buf); let mut it = reader.lines(); assert_eq!(it.next(), Some(Ok("a".to_string()))); assert_eq!(it.next(), Some(Ok("b".to_string()))); @@ -618,14 +617,14 @@ mod tests { #[test] fn read_char_buffered() { let buf = [195u8, 159u8]; - let mut reader = BufReader::with_capacity(1, &buf[..]); + let reader = BufReader::with_capacity(1, &buf[..]); assert_eq!(reader.chars().next(), Some(Ok('ß'))); } #[test] fn test_chars() { let buf = [195u8, 159u8, b'a']; - let mut reader = BufReader::with_capacity(1, &buf[..]); + let reader = BufReader::with_capacity(1, &buf[..]); let mut it = reader.chars(); assert_eq!(it.next(), Some(Ok('ß'))); assert_eq!(it.next(), Some(Ok('a'))); diff --git a/src/libstd/io/mod.rs b/src/libstd/io/mod.rs index 1133bd87a7db9..5b319f4c6876d 100644 --- a/src/libstd/io/mod.rs +++ b/src/libstd/io/mod.rs @@ -869,12 +869,12 @@ mod tests { #[test] fn split() { - let mut buf = Cursor::new(b"12"); + let buf = Cursor::new(b"12"); let mut s = buf.split(b'3'); 
assert_eq!(s.next(), Some(Ok(vec![b'1', b'2']))); assert_eq!(s.next(), None); - let mut buf = Cursor::new(b"1233"); + let buf = Cursor::new(b"1233"); let mut s = buf.split(b'3'); assert_eq!(s.next(), Some(Ok(vec![b'1', b'2']))); assert_eq!(s.next(), Some(Ok(vec![]))); @@ -902,12 +902,12 @@ mod tests { #[test] fn lines() { - let mut buf = Cursor::new(b"12"); + let buf = Cursor::new(b"12"); let mut s = buf.lines(); assert_eq!(s.next(), Some(Ok("12".to_string()))); assert_eq!(s.next(), None); - let mut buf = Cursor::new(b"12\n\n"); + let buf = Cursor::new(b"12\n\n"); let mut s = buf.lines(); assert_eq!(s.next(), Some(Ok("12".to_string()))); assert_eq!(s.next(), Some(Ok(String::new()))); diff --git a/src/libstd/lib.rs b/src/libstd/lib.rs index fbd403ea593b8..4b6e9cf76f9e5 100644 --- a/src/libstd/lib.rs +++ b/src/libstd/lib.rs @@ -109,7 +109,6 @@ #![feature(box_syntax)] #![feature(collections)] #![feature(core)] -#![feature(hash)] #![feature(int_uint)] #![feature(lang_items)] #![feature(libc)] @@ -123,7 +122,7 @@ #![feature(unsafe_destructor)] #![feature(unsafe_no_drop_flag)] #![feature(macro_reexport)] -#![cfg_attr(test, feature(test))] +#![cfg_attr(test, feature(test, rustc_private, env))] // Don't link to std. We are std. #![feature(no_std)] @@ -219,15 +218,15 @@ mod int_macros; #[macro_use] mod uint_macros; -#[path = "num/int.rs"] pub mod int; #[path = "num/isize.rs"] pub mod isize; +pub use isize as int; #[path = "num/i8.rs"] pub mod i8; #[path = "num/i16.rs"] pub mod i16; #[path = "num/i32.rs"] pub mod i32; #[path = "num/i64.rs"] pub mod i64; -#[path = "num/uint.rs"] pub mod uint; #[path = "num/usize.rs"] pub mod usize; +pub use usize as uint; #[path = "num/u8.rs"] pub mod u8; #[path = "num/u16.rs"] pub mod u16; #[path = "num/u32.rs"] pub mod u32; diff --git a/src/libstd/net/addr.rs b/src/libstd/net/addr.rs index 51944adf3b403..f16f501c46a19 100644 --- a/src/libstd/net/addr.rs +++ b/src/libstd/net/addr.rs @@ -147,21 +147,6 @@ impl PartialEq for Repr { } impl Eq for Repr {} -#[cfg(stage0)] -impl hash::Hash for Repr { - fn hash(&self, s: &mut S) { - match *self { - Repr::V4(ref a) => { - (a.sin_family, a.sin_port, a.sin_addr.s_addr).hash(s) - } - Repr::V6(ref a) => { - (a.sin6_family, a.sin6_port, &a.sin6_addr.s6_addr, - a.sin6_flowinfo, a.sin6_scope_id).hash(s) - } - } - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl hash::Hash for Repr { fn hash(&self, s: &mut H) { diff --git a/src/libstd/net/ip.rs b/src/libstd/net/ip.rs index 571a1b03ef07f..d699886e57747 100644 --- a/src/libstd/net/ip.rs +++ b/src/libstd/net/ip.rs @@ -189,13 +189,6 @@ impl PartialEq for Ipv4Addr { } impl Eq for Ipv4Addr {} -#[cfg(stage0)] -impl hash::Hash for Ipv4Addr { - fn hash(&self, s: &mut S) { - self.inner.s_addr.hash(s) - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl hash::Hash for Ipv4Addr { fn hash(&self, s: &mut H) { @@ -429,13 +422,6 @@ impl PartialEq for Ipv6Addr { } impl Eq for Ipv6Addr {} -#[cfg(stage0)] -impl hash::Hash for Ipv6Addr { - fn hash(&self, s: &mut S) { - self.inner.s6_addr.hash(s) - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl hash::Hash for Ipv6Addr { fn hash(&self, s: &mut H) { diff --git a/src/libstd/net/tcp.rs b/src/libstd/net/tcp.rs index b861b74947eeb..f99cd2b1d1be9 100644 --- a/src/libstd/net/tcp.rs +++ b/src/libstd/net/tcp.rs @@ -456,12 +456,6 @@ mod tests { } } - pub fn socket_name(addr: SocketAddr) { - } - - pub fn peer_name(addr: SocketAddr) { - } - #[test] fn socket_and_peer_name_ip4() { 
each_ip(&mut |addr| { diff --git a/src/libstd/net/test.rs b/src/libstd/net/test.rs index 971fb4b69c8ef..c70e92884ac65 100644 --- a/src/libstd/net/test.rs +++ b/src/libstd/net/test.rs @@ -33,7 +33,7 @@ fn base_port() -> u16 { let cwd = env::current_dir().unwrap(); let dirs = ["32-opt", "32-nopt", "64-opt", "64-nopt", "64-opt-vg", "all-opt", "snap3", "dist"]; - dirs.iter().enumerate().find(|&(i, dir)| { + dirs.iter().enumerate().find(|&(_, dir)| { cwd.as_str().unwrap().contains(dir) }).map(|p| p.0).unwrap_or(0) as u16 * 1000 + 19600 } diff --git a/src/libstd/num/int.rs b/src/libstd/num/int.rs deleted file mode 100644 index 669952eee3924..0000000000000 --- a/src/libstd/num/int.rs +++ /dev/null @@ -1,22 +0,0 @@ -// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -//! Deprecated: replaced by `isize`. -//! -//! The rollout of the new type will gradually take place over the -//! alpha cycle along with the development of clearer conventions -//! around integer types. - -#![unstable(feature = "std_misc")] -#![deprecated(since = "1.0.0", reason = "replaced by isize")] - -pub use core::int::{BITS, BYTES, MIN, MAX}; - -int_module! { int } diff --git a/src/libstd/num/mod.rs b/src/libstd/num/mod.rs index c94c164983329..968adfafeab73 100644 --- a/src/libstd/num/mod.rs +++ b/src/libstd/num/mod.rs @@ -11,7 +11,7 @@ //! Numeric traits and functions for generic mathematics //! //! These are implemented for the primitive numeric types in `std::{u8, u16, -//! u32, u64, uint, i8, i16, i32, i64, int, f32, f64}`. +//! u32, u64, usize, i8, i16, i32, i64, isize, f32, f64}`. #![stable(feature = "rust1", since = "1.0.0")] #![allow(missing_docs)] @@ -146,12 +146,12 @@ pub trait Float #[deprecated(since = "1.0.0", reason = "use `std::f32::MANTISSA_DIGITS` or \ `std::f64::MANTISSA_DIGITS` as appropriate")] - fn mantissa_digits(unused_self: Option) -> uint; + fn mantissa_digits(unused_self: Option) -> usize; /// Deprecated: use `std::f32::DIGITS` or `std::f64::DIGITS` instead. #[unstable(feature = "std_misc")] #[deprecated(since = "1.0.0", reason = "use `std::f32::DIGITS` or `std::f64::DIGITS` as appropriate")] - fn digits(unused_self: Option) -> uint; + fn digits(unused_self: Option) -> usize; /// Deprecated: use `std::f32::EPSILON` or `std::f64::EPSILON` instead. #[unstable(feature = "std_misc")] #[deprecated(since = "1.0.0", @@ -161,22 +161,22 @@ pub trait Float #[unstable(feature = "std_misc")] #[deprecated(since = "1.0.0", reason = "use `std::f32::MIN_EXP` or `std::f64::MIN_EXP` as appropriate")] - fn min_exp(unused_self: Option) -> int; + fn min_exp(unused_self: Option) -> isize; /// Deprecated: use `std::f32::MAX_EXP` or `std::f64::MAX_EXP` instead. #[unstable(feature = "std_misc")] #[deprecated(since = "1.0.0", reason = "use `std::f32::MAX_EXP` or `std::f64::MAX_EXP` as appropriate")] - fn max_exp(unused_self: Option) -> int; + fn max_exp(unused_self: Option) -> isize; /// Deprecated: use `std::f32::MIN_10_EXP` or `std::f64::MIN_10_EXP` instead. 
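
The replacements named in these deprecation messages are plain constants rather than trait methods. A quick illustration with today's associated consts (the module-level `std::f32::EPSILON` style named in the messages also still works):

```rust
fn main() {
    // The old Float trait methods took an `unused_self: Option<Self>`-style
    // parameter just to pick the type; the constants need no value at all.
    println!("f32 epsilon:          {}", f32::EPSILON);
    println!("f32 mantissa digits:  {}", f32::MANTISSA_DIGITS);
    println!("f64 min/max exponent: {} / {}", f64::MIN_EXP, f64::MAX_EXP);
    println!("f64 min/max 10^exp:   {} / {}", f64::MIN_10_EXP, f64::MAX_10_EXP);
}
```
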
#[unstable(feature = "std_misc")] #[deprecated(since = "1.0.0", reason = "use `std::f32::MIN_10_EXP` or `std::f64::MIN_10_EXP` as appropriate")] - fn min_10_exp(unused_self: Option) -> int; + fn min_10_exp(unused_self: Option) -> isize; /// Deprecated: use `std::f32::MAX_10_EXP` or `std::f64::MAX_10_EXP` instead. #[unstable(feature = "std_misc")] #[deprecated(since = "1.0.0", reason = "use `std::f32::MAX_10_EXP` or `std::f64::MAX_10_EXP` as appropriate")] - fn max_10_exp(unused_self: Option) -> int; + fn max_10_exp(unused_self: Option) -> isize; /// Returns the smallest finite value that this type can represent. /// @@ -698,7 +698,7 @@ pub trait Float /// ``` #[unstable(feature = "std_misc", reason = "pending integer conventions")] - fn ldexp(x: Self, exp: int) -> Self; + fn ldexp(x: Self, exp: isize) -> Self; /// Breaks the number into a normalized fraction and a base-2 exponent, /// satisfying: /// @@ -720,7 +720,7 @@ pub trait Float /// ``` #[unstable(feature = "std_misc", reason = "pending integer conventions")] - fn frexp(self) -> (Self, int); + fn frexp(self) -> (Self, isize); /// Returns the next representable floating-point value in the direction of /// `other`. /// @@ -1112,12 +1112,12 @@ mod tests { use i16; use i32; use i64; - use int; + use isize; use u8; use u16; use u32; use u64; - use uint; + use usize; macro_rules! test_cast_20 { ($_20:expr) => ({ @@ -1179,25 +1179,25 @@ mod tests { #[test] fn test_cast_range_int_min() { - assert_eq!(int::MIN.to_int(), Some(int::MIN as int)); - assert_eq!(int::MIN.to_i8(), None); - assert_eq!(int::MIN.to_i16(), None); - // int::MIN.to_i32() is word-size specific - assert_eq!(int::MIN.to_i64(), Some(int::MIN as i64)); - assert_eq!(int::MIN.to_uint(), None); - assert_eq!(int::MIN.to_u8(), None); - assert_eq!(int::MIN.to_u16(), None); - assert_eq!(int::MIN.to_u32(), None); - assert_eq!(int::MIN.to_u64(), None); + assert_eq!(isize::MIN.to_int(), Some(isize::MIN as isize)); + assert_eq!(isize::MIN.to_i8(), None); + assert_eq!(isize::MIN.to_i16(), None); + // isize::MIN.to_i32() is word-size specific + assert_eq!(isize::MIN.to_i64(), Some(isize::MIN as i64)); + assert_eq!(isize::MIN.to_uint(), None); + assert_eq!(isize::MIN.to_u8(), None); + assert_eq!(isize::MIN.to_u16(), None); + assert_eq!(isize::MIN.to_u32(), None); + assert_eq!(isize::MIN.to_u64(), None); #[cfg(target_pointer_width = "32")] fn check_word_size() { - assert_eq!(int::MIN.to_i32(), Some(int::MIN as i32)); + assert_eq!(isize::MIN.to_i32(), Some(isize::MIN as i32)); } #[cfg(target_pointer_width = "64")] fn check_word_size() { - assert_eq!(int::MIN.to_i32(), None); + assert_eq!(isize::MIN.to_i32(), None); } check_word_size(); @@ -1205,7 +1205,7 @@ mod tests { #[test] fn test_cast_range_i8_min() { - assert_eq!(i8::MIN.to_int(), Some(i8::MIN as int)); + assert_eq!(i8::MIN.to_int(), Some(i8::MIN as isize)); assert_eq!(i8::MIN.to_i8(), Some(i8::MIN as i8)); assert_eq!(i8::MIN.to_i16(), Some(i8::MIN as i16)); assert_eq!(i8::MIN.to_i32(), Some(i8::MIN as i32)); @@ -1219,7 +1219,7 @@ mod tests { #[test] fn test_cast_range_i16_min() { - assert_eq!(i16::MIN.to_int(), Some(i16::MIN as int)); + assert_eq!(i16::MIN.to_int(), Some(i16::MIN as isize)); assert_eq!(i16::MIN.to_i8(), None); assert_eq!(i16::MIN.to_i16(), Some(i16::MIN as i16)); assert_eq!(i16::MIN.to_i32(), Some(i16::MIN as i32)); @@ -1233,7 +1233,7 @@ mod tests { #[test] fn test_cast_range_i32_min() { - assert_eq!(i32::MIN.to_int(), Some(i32::MIN as int)); + assert_eq!(i32::MIN.to_int(), Some(i32::MIN as isize)); 
assert_eq!(i32::MIN.to_i8(), None); assert_eq!(i32::MIN.to_i16(), None); assert_eq!(i32::MIN.to_i32(), Some(i32::MIN as i32)); @@ -1265,7 +1265,7 @@ mod tests { #[cfg(target_pointer_width = "64")] fn check_word_size() { - assert_eq!(i64::MIN.to_int(), Some(i64::MIN as int)); + assert_eq!(i64::MIN.to_int(), Some(i64::MIN as isize)); } check_word_size(); @@ -1273,26 +1273,26 @@ mod tests { #[test] fn test_cast_range_int_max() { - assert_eq!(int::MAX.to_int(), Some(int::MAX as int)); - assert_eq!(int::MAX.to_i8(), None); - assert_eq!(int::MAX.to_i16(), None); - // int::MAX.to_i32() is word-size specific - assert_eq!(int::MAX.to_i64(), Some(int::MAX as i64)); - assert_eq!(int::MAX.to_u8(), None); - assert_eq!(int::MAX.to_u16(), None); - // int::MAX.to_u32() is word-size specific - assert_eq!(int::MAX.to_u64(), Some(int::MAX as u64)); + assert_eq!(isize::MAX.to_int(), Some(isize::MAX as isize)); + assert_eq!(isize::MAX.to_i8(), None); + assert_eq!(isize::MAX.to_i16(), None); + // isize::MAX.to_i32() is word-size specific + assert_eq!(isize::MAX.to_i64(), Some(isize::MAX as i64)); + assert_eq!(isize::MAX.to_u8(), None); + assert_eq!(isize::MAX.to_u16(), None); + // isize::MAX.to_u32() is word-size specific + assert_eq!(isize::MAX.to_u64(), Some(isize::MAX as u64)); #[cfg(target_pointer_width = "32")] fn check_word_size() { - assert_eq!(int::MAX.to_i32(), Some(int::MAX as i32)); - assert_eq!(int::MAX.to_u32(), Some(int::MAX as u32)); + assert_eq!(isize::MAX.to_i32(), Some(isize::MAX as i32)); + assert_eq!(isize::MAX.to_u32(), Some(isize::MAX as u32)); } #[cfg(target_pointer_width = "64")] fn check_word_size() { - assert_eq!(int::MAX.to_i32(), None); - assert_eq!(int::MAX.to_u32(), None); + assert_eq!(isize::MAX.to_i32(), None); + assert_eq!(isize::MAX.to_u32(), None); } check_word_size(); @@ -1300,12 +1300,12 @@ mod tests { #[test] fn test_cast_range_i8_max() { - assert_eq!(i8::MAX.to_int(), Some(i8::MAX as int)); + assert_eq!(i8::MAX.to_int(), Some(i8::MAX as isize)); assert_eq!(i8::MAX.to_i8(), Some(i8::MAX as i8)); assert_eq!(i8::MAX.to_i16(), Some(i8::MAX as i16)); assert_eq!(i8::MAX.to_i32(), Some(i8::MAX as i32)); assert_eq!(i8::MAX.to_i64(), Some(i8::MAX as i64)); - assert_eq!(i8::MAX.to_uint(), Some(i8::MAX as uint)); + assert_eq!(i8::MAX.to_uint(), Some(i8::MAX as usize)); assert_eq!(i8::MAX.to_u8(), Some(i8::MAX as u8)); assert_eq!(i8::MAX.to_u16(), Some(i8::MAX as u16)); assert_eq!(i8::MAX.to_u32(), Some(i8::MAX as u32)); @@ -1314,12 +1314,12 @@ mod tests { #[test] fn test_cast_range_i16_max() { - assert_eq!(i16::MAX.to_int(), Some(i16::MAX as int)); + assert_eq!(i16::MAX.to_int(), Some(i16::MAX as isize)); assert_eq!(i16::MAX.to_i8(), None); assert_eq!(i16::MAX.to_i16(), Some(i16::MAX as i16)); assert_eq!(i16::MAX.to_i32(), Some(i16::MAX as i32)); assert_eq!(i16::MAX.to_i64(), Some(i16::MAX as i64)); - assert_eq!(i16::MAX.to_uint(), Some(i16::MAX as uint)); + assert_eq!(i16::MAX.to_uint(), Some(i16::MAX as usize)); assert_eq!(i16::MAX.to_u8(), None); assert_eq!(i16::MAX.to_u16(), Some(i16::MAX as u16)); assert_eq!(i16::MAX.to_u32(), Some(i16::MAX as u32)); @@ -1328,12 +1328,12 @@ mod tests { #[test] fn test_cast_range_i32_max() { - assert_eq!(i32::MAX.to_int(), Some(i32::MAX as int)); + assert_eq!(i32::MAX.to_int(), Some(i32::MAX as isize)); assert_eq!(i32::MAX.to_i8(), None); assert_eq!(i32::MAX.to_i16(), None); assert_eq!(i32::MAX.to_i32(), Some(i32::MAX as i32)); assert_eq!(i32::MAX.to_i64(), Some(i32::MAX as i64)); - assert_eq!(i32::MAX.to_uint(), Some(i32::MAX as uint)); + 
assert_eq!(i32::MAX.to_uint(), Some(i32::MAX as usize)); assert_eq!(i32::MAX.to_u8(), None); assert_eq!(i32::MAX.to_u16(), None); assert_eq!(i32::MAX.to_u32(), Some(i32::MAX as u32)); @@ -1361,8 +1361,8 @@ mod tests { #[cfg(target_pointer_width = "64")] fn check_word_size() { - assert_eq!(i64::MAX.to_int(), Some(i64::MAX as int)); - assert_eq!(i64::MAX.to_uint(), Some(i64::MAX as uint)); + assert_eq!(i64::MAX.to_int(), Some(i64::MAX as isize)); + assert_eq!(i64::MAX.to_uint(), Some(i64::MAX as usize)); } check_word_size(); @@ -1370,26 +1370,26 @@ mod tests { #[test] fn test_cast_range_uint_min() { - assert_eq!(uint::MIN.to_int(), Some(uint::MIN as int)); - assert_eq!(uint::MIN.to_i8(), Some(uint::MIN as i8)); - assert_eq!(uint::MIN.to_i16(), Some(uint::MIN as i16)); - assert_eq!(uint::MIN.to_i32(), Some(uint::MIN as i32)); - assert_eq!(uint::MIN.to_i64(), Some(uint::MIN as i64)); - assert_eq!(uint::MIN.to_uint(), Some(uint::MIN as uint)); - assert_eq!(uint::MIN.to_u8(), Some(uint::MIN as u8)); - assert_eq!(uint::MIN.to_u16(), Some(uint::MIN as u16)); - assert_eq!(uint::MIN.to_u32(), Some(uint::MIN as u32)); - assert_eq!(uint::MIN.to_u64(), Some(uint::MIN as u64)); + assert_eq!(usize::MIN.to_int(), Some(usize::MIN as isize)); + assert_eq!(usize::MIN.to_i8(), Some(usize::MIN as i8)); + assert_eq!(usize::MIN.to_i16(), Some(usize::MIN as i16)); + assert_eq!(usize::MIN.to_i32(), Some(usize::MIN as i32)); + assert_eq!(usize::MIN.to_i64(), Some(usize::MIN as i64)); + assert_eq!(usize::MIN.to_uint(), Some(usize::MIN as usize)); + assert_eq!(usize::MIN.to_u8(), Some(usize::MIN as u8)); + assert_eq!(usize::MIN.to_u16(), Some(usize::MIN as u16)); + assert_eq!(usize::MIN.to_u32(), Some(usize::MIN as u32)); + assert_eq!(usize::MIN.to_u64(), Some(usize::MIN as u64)); } #[test] fn test_cast_range_u8_min() { - assert_eq!(u8::MIN.to_int(), Some(u8::MIN as int)); + assert_eq!(u8::MIN.to_int(), Some(u8::MIN as isize)); assert_eq!(u8::MIN.to_i8(), Some(u8::MIN as i8)); assert_eq!(u8::MIN.to_i16(), Some(u8::MIN as i16)); assert_eq!(u8::MIN.to_i32(), Some(u8::MIN as i32)); assert_eq!(u8::MIN.to_i64(), Some(u8::MIN as i64)); - assert_eq!(u8::MIN.to_uint(), Some(u8::MIN as uint)); + assert_eq!(u8::MIN.to_uint(), Some(u8::MIN as usize)); assert_eq!(u8::MIN.to_u8(), Some(u8::MIN as u8)); assert_eq!(u8::MIN.to_u16(), Some(u8::MIN as u16)); assert_eq!(u8::MIN.to_u32(), Some(u8::MIN as u32)); @@ -1398,12 +1398,12 @@ mod tests { #[test] fn test_cast_range_u16_min() { - assert_eq!(u16::MIN.to_int(), Some(u16::MIN as int)); + assert_eq!(u16::MIN.to_int(), Some(u16::MIN as isize)); assert_eq!(u16::MIN.to_i8(), Some(u16::MIN as i8)); assert_eq!(u16::MIN.to_i16(), Some(u16::MIN as i16)); assert_eq!(u16::MIN.to_i32(), Some(u16::MIN as i32)); assert_eq!(u16::MIN.to_i64(), Some(u16::MIN as i64)); - assert_eq!(u16::MIN.to_uint(), Some(u16::MIN as uint)); + assert_eq!(u16::MIN.to_uint(), Some(u16::MIN as usize)); assert_eq!(u16::MIN.to_u8(), Some(u16::MIN as u8)); assert_eq!(u16::MIN.to_u16(), Some(u16::MIN as u16)); assert_eq!(u16::MIN.to_u32(), Some(u16::MIN as u32)); @@ -1412,12 +1412,12 @@ mod tests { #[test] fn test_cast_range_u32_min() { - assert_eq!(u32::MIN.to_int(), Some(u32::MIN as int)); + assert_eq!(u32::MIN.to_int(), Some(u32::MIN as isize)); assert_eq!(u32::MIN.to_i8(), Some(u32::MIN as i8)); assert_eq!(u32::MIN.to_i16(), Some(u32::MIN as i16)); assert_eq!(u32::MIN.to_i32(), Some(u32::MIN as i32)); assert_eq!(u32::MIN.to_i64(), Some(u32::MIN as i64)); - assert_eq!(u32::MIN.to_uint(), Some(u32::MIN as uint)); + 
assert_eq!(u32::MIN.to_uint(), Some(u32::MIN as usize)); assert_eq!(u32::MIN.to_u8(), Some(u32::MIN as u8)); assert_eq!(u32::MIN.to_u16(), Some(u32::MIN as u16)); assert_eq!(u32::MIN.to_u32(), Some(u32::MIN as u32)); @@ -1426,12 +1426,12 @@ mod tests { #[test] fn test_cast_range_u64_min() { - assert_eq!(u64::MIN.to_int(), Some(u64::MIN as int)); + assert_eq!(u64::MIN.to_int(), Some(u64::MIN as isize)); assert_eq!(u64::MIN.to_i8(), Some(u64::MIN as i8)); assert_eq!(u64::MIN.to_i16(), Some(u64::MIN as i16)); assert_eq!(u64::MIN.to_i32(), Some(u64::MIN as i32)); assert_eq!(u64::MIN.to_i64(), Some(u64::MIN as i64)); - assert_eq!(u64::MIN.to_uint(), Some(u64::MIN as uint)); + assert_eq!(u64::MIN.to_uint(), Some(u64::MIN as usize)); assert_eq!(u64::MIN.to_u8(), Some(u64::MIN as u8)); assert_eq!(u64::MIN.to_u16(), Some(u64::MIN as u16)); assert_eq!(u64::MIN.to_u32(), Some(u64::MIN as u32)); @@ -1440,26 +1440,26 @@ mod tests { #[test] fn test_cast_range_uint_max() { - assert_eq!(uint::MAX.to_int(), None); - assert_eq!(uint::MAX.to_i8(), None); - assert_eq!(uint::MAX.to_i16(), None); - assert_eq!(uint::MAX.to_i32(), None); - // uint::MAX.to_i64() is word-size specific - assert_eq!(uint::MAX.to_u8(), None); - assert_eq!(uint::MAX.to_u16(), None); - // uint::MAX.to_u32() is word-size specific - assert_eq!(uint::MAX.to_u64(), Some(uint::MAX as u64)); + assert_eq!(usize::MAX.to_int(), None); + assert_eq!(usize::MAX.to_i8(), None); + assert_eq!(usize::MAX.to_i16(), None); + assert_eq!(usize::MAX.to_i32(), None); + // usize::MAX.to_i64() is word-size specific + assert_eq!(usize::MAX.to_u8(), None); + assert_eq!(usize::MAX.to_u16(), None); + // usize::MAX.to_u32() is word-size specific + assert_eq!(usize::MAX.to_u64(), Some(usize::MAX as u64)); #[cfg(target_pointer_width = "32")] fn check_word_size() { - assert_eq!(uint::MAX.to_u32(), Some(uint::MAX as u32)); - assert_eq!(uint::MAX.to_i64(), Some(uint::MAX as i64)); + assert_eq!(usize::MAX.to_u32(), Some(usize::MAX as u32)); + assert_eq!(usize::MAX.to_i64(), Some(usize::MAX as i64)); } #[cfg(target_pointer_width = "64")] fn check_word_size() { - assert_eq!(uint::MAX.to_u32(), None); - assert_eq!(uint::MAX.to_i64(), None); + assert_eq!(usize::MAX.to_u32(), None); + assert_eq!(usize::MAX.to_i64(), None); } check_word_size(); @@ -1467,12 +1467,12 @@ mod tests { #[test] fn test_cast_range_u8_max() { - assert_eq!(u8::MAX.to_int(), Some(u8::MAX as int)); + assert_eq!(u8::MAX.to_int(), Some(u8::MAX as isize)); assert_eq!(u8::MAX.to_i8(), None); assert_eq!(u8::MAX.to_i16(), Some(u8::MAX as i16)); assert_eq!(u8::MAX.to_i32(), Some(u8::MAX as i32)); assert_eq!(u8::MAX.to_i64(), Some(u8::MAX as i64)); - assert_eq!(u8::MAX.to_uint(), Some(u8::MAX as uint)); + assert_eq!(u8::MAX.to_uint(), Some(u8::MAX as usize)); assert_eq!(u8::MAX.to_u8(), Some(u8::MAX as u8)); assert_eq!(u8::MAX.to_u16(), Some(u8::MAX as u16)); assert_eq!(u8::MAX.to_u32(), Some(u8::MAX as u32)); @@ -1481,12 +1481,12 @@ mod tests { #[test] fn test_cast_range_u16_max() { - assert_eq!(u16::MAX.to_int(), Some(u16::MAX as int)); + assert_eq!(u16::MAX.to_int(), Some(u16::MAX as isize)); assert_eq!(u16::MAX.to_i8(), None); assert_eq!(u16::MAX.to_i16(), None); assert_eq!(u16::MAX.to_i32(), Some(u16::MAX as i32)); assert_eq!(u16::MAX.to_i64(), Some(u16::MAX as i64)); - assert_eq!(u16::MAX.to_uint(), Some(u16::MAX as uint)); + assert_eq!(u16::MAX.to_uint(), Some(u16::MAX as usize)); assert_eq!(u16::MAX.to_u8(), None); assert_eq!(u16::MAX.to_u16(), Some(u16::MAX as u16)); assert_eq!(u16::MAX.to_u32(), 
Some(u16::MAX as u32)); @@ -1500,7 +1500,7 @@ mod tests { assert_eq!(u32::MAX.to_i16(), None); assert_eq!(u32::MAX.to_i32(), None); assert_eq!(u32::MAX.to_i64(), Some(u32::MAX as i64)); - assert_eq!(u32::MAX.to_uint(), Some(u32::MAX as uint)); + assert_eq!(u32::MAX.to_uint(), Some(u32::MAX as usize)); assert_eq!(u32::MAX.to_u8(), None); assert_eq!(u32::MAX.to_u16(), None); assert_eq!(u32::MAX.to_u32(), Some(u32::MAX as u32)); @@ -1513,7 +1513,7 @@ mod tests { #[cfg(target_pointer_width = "64")] fn check_word_size() { - assert_eq!(u32::MAX.to_int(), Some(u32::MAX as int)); + assert_eq!(u32::MAX.to_int(), Some(u32::MAX as isize)); } check_word_size(); @@ -1539,7 +1539,7 @@ mod tests { #[cfg(target_pointer_width = "64")] fn check_word_size() { - assert_eq!(u64::MAX.to_uint(), Some(u64::MAX as uint)); + assert_eq!(u64::MAX.to_uint(), Some(u64::MAX as usize)); } check_word_size(); @@ -1547,7 +1547,7 @@ mod tests { #[test] fn test_saturating_add_uint() { - use uint::MAX; + use usize::MAX; assert_eq!(3_usize.saturating_add(5_usize), 8_usize); assert_eq!(3_usize.saturating_add(MAX-1), MAX); assert_eq!(MAX.saturating_add(MAX), MAX); @@ -1556,7 +1556,7 @@ mod tests { #[test] fn test_saturating_sub_uint() { - use uint::MAX; + use usize::MAX; assert_eq!(5_usize.saturating_sub(3_usize), 2_usize); assert_eq!(3_usize.saturating_sub(5_usize), 0_usize); assert_eq!(0_usize.saturating_sub(1_usize), 0_usize); @@ -1565,7 +1565,7 @@ mod tests { #[test] fn test_saturating_add_int() { - use int::{MIN,MAX}; + use isize::{MIN,MAX}; assert_eq!(3.saturating_add(5), 8); assert_eq!(3.saturating_add(MAX-1), MAX); assert_eq!(MAX.saturating_add(MAX), MAX); @@ -1577,7 +1577,7 @@ mod tests { #[test] fn test_saturating_sub_int() { - use int::{MIN,MAX}; + use isize::{MIN,MAX}; assert_eq!(3.saturating_sub(5), -2); assert_eq!(MIN.saturating_sub(1), MIN); assert_eq!((-2).saturating_sub(MAX), MIN); @@ -1589,13 +1589,13 @@ mod tests { #[test] fn test_checked_add() { - let five_less = uint::MAX - 5; - assert_eq!(five_less.checked_add(0), Some(uint::MAX - 5)); - assert_eq!(five_less.checked_add(1), Some(uint::MAX - 4)); - assert_eq!(five_less.checked_add(2), Some(uint::MAX - 3)); - assert_eq!(five_less.checked_add(3), Some(uint::MAX - 2)); - assert_eq!(five_less.checked_add(4), Some(uint::MAX - 1)); - assert_eq!(five_less.checked_add(5), Some(uint::MAX)); + let five_less = usize::MAX - 5; + assert_eq!(five_less.checked_add(0), Some(usize::MAX - 5)); + assert_eq!(five_less.checked_add(1), Some(usize::MAX - 4)); + assert_eq!(five_less.checked_add(2), Some(usize::MAX - 3)); + assert_eq!(five_less.checked_add(3), Some(usize::MAX - 2)); + assert_eq!(five_less.checked_add(4), Some(usize::MAX - 1)); + assert_eq!(five_less.checked_add(5), Some(usize::MAX)); assert_eq!(five_less.checked_add(6), None); assert_eq!(five_less.checked_add(7), None); } @@ -1614,7 +1614,7 @@ mod tests { #[test] fn test_checked_mul() { - let third = uint::MAX / 3; + let third = usize::MAX / 3; assert_eq!(third.checked_mul(0), Some(0)); assert_eq!(third.checked_mul(1), Some(third)); assert_eq!(third.checked_mul(2), Some(third * 2)); @@ -1641,7 +1641,7 @@ mod tests { test_is_power_of_two!{ test_is_power_of_two_u16, u16 } test_is_power_of_two!{ test_is_power_of_two_u32, u32 } test_is_power_of_two!{ test_is_power_of_two_u64, u64 } - test_is_power_of_two!{ test_is_power_of_two_uint, uint } + test_is_power_of_two!{ test_is_power_of_two_uint, usize } macro_rules! 
test_next_power_of_two { ($test_name:ident, $T:ident) => ( @@ -1661,7 +1661,7 @@ mod tests { test_next_power_of_two! { test_next_power_of_two_u16, u16 } test_next_power_of_two! { test_next_power_of_two_u32, u32 } test_next_power_of_two! { test_next_power_of_two_u64, u64 } - test_next_power_of_two! { test_next_power_of_two_uint, uint } + test_next_power_of_two! { test_next_power_of_two_uint, usize } macro_rules! test_checked_next_power_of_two { ($test_name:ident, $T:ident) => ( @@ -1684,10 +1684,10 @@ mod tests { test_checked_next_power_of_two! { test_checked_next_power_of_two_u16, u16 } test_checked_next_power_of_two! { test_checked_next_power_of_two_u32, u32 } test_checked_next_power_of_two! { test_checked_next_power_of_two_u64, u64 } - test_checked_next_power_of_two! { test_checked_next_power_of_two_uint, uint } + test_checked_next_power_of_two! { test_checked_next_power_of_two_uint, usize } #[derive(PartialEq, Debug)] - struct Value { x: int } + struct Value { x: isize } impl ToPrimitive for Value { fn to_i64(&self) -> Option { self.x.to_i64() } @@ -1695,8 +1695,8 @@ mod tests { } impl FromPrimitive for Value { - fn from_i64(n: i64) -> Option { Some(Value { x: n as int }) } - fn from_u64(n: u64) -> Option { Some(Value { x: n as int }) } + fn from_i64(n: i64) -> Option { Some(Value { x: n as isize }) } + fn from_u64(n: u64) -> Option { Some(Value { x: n as isize }) } } #[test] @@ -1734,7 +1734,7 @@ mod tests { #[test] fn test_pow() { - fn naive_pow(base: T, exp: uint) -> T { + fn naive_pow(base: T, exp: usize) -> T { let one: T = Int::one(); (0..exp).fold(one, |acc, _| acc * base) } diff --git a/src/libstd/num/uint.rs b/src/libstd/num/uint.rs deleted file mode 100644 index c7b491381f337..0000000000000 --- a/src/libstd/num/uint.rs +++ /dev/null @@ -1,22 +0,0 @@ -// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT -// file at the top-level directory of this distribution and at -// http://rust-lang.org/COPYRIGHT. -// -// Licensed under the Apache License, Version 2.0 or the MIT license -// , at your -// option. This file may not be copied, modified, or distributed -// except according to those terms. - -//! Deprecated: replaced by `usize`. -//! -//! The rollout of the new type will gradually take place over the -//! alpha cycle along with the development of clearer conventions -//! around integer types. - -#![unstable(feature = "std_misc")] -#![deprecated(since = "1.0.0", reason = "replaced by usize")] - -pub use core::uint::{BITS, BYTES, MIN, MAX}; - -uint_module! { uint } diff --git a/src/libstd/old_io/mod.rs b/src/libstd/old_io/mod.rs index fc3deb67f41ec..b82572fc08957 100644 --- a/src/libstd/old_io/mod.rs +++ b/src/libstd/old_io/mod.rs @@ -240,6 +240,8 @@ #![unstable(feature = "old_io")] #![deny(unused_must_use)] +#![allow(deprecated)] // seriously this is all deprecated +#![allow(unused_imports)] pub use self::SeekStyle::*; pub use self::FileMode::*; diff --git a/src/libstd/old_io/process.rs b/src/libstd/old_io/process.rs index c803cfbcb7d85..a13295b1ccb50 100644 --- a/src/libstd/old_io/process.rs +++ b/src/libstd/old_io/process.rs @@ -104,19 +104,7 @@ struct EnvKey(CString); #[derive(Eq, Clone, Debug)] struct EnvKey(CString); -#[cfg(all(windows, stage0))] -impl hash::Hash for EnvKey { - fn hash(&self, state: &mut H) { - let &EnvKey(ref x) = self; - match str::from_utf8(x.as_bytes()) { - Ok(s) => for ch in s.chars() { - (ch as u8 as char).to_lowercase().hash(state); - }, - Err(..) 
=> x.hash(state) - } - } -} -#[cfg(all(windows, not(stage0)))] +#[cfg(windows)] impl hash::Hash for EnvKey { fn hash(&self, state: &mut H) { let &EnvKey(ref x) = self; diff --git a/src/libstd/old_path/mod.rs b/src/libstd/old_path/mod.rs index e9005aa22bcfb..4f8976fb2ecda 100644 --- a/src/libstd/old_path/mod.rs +++ b/src/libstd/old_path/mod.rs @@ -60,6 +60,8 @@ //! ``` #![unstable(feature = "old_path")] +#![allow(deprecated)] // seriously this is all deprecated +#![allow(unused_imports)] use core::marker::Sized; use ffi::CString; diff --git a/src/libstd/old_path/posix.rs b/src/libstd/old_path/posix.rs index 15eee9e4a0c02..8d5765e1ffe5e 100644 --- a/src/libstd/old_path/posix.rs +++ b/src/libstd/old_path/posix.rs @@ -100,14 +100,6 @@ impl FromStr for Path { #[derive(Debug, Clone, PartialEq, Copy)] pub struct ParsePathError; -#[cfg(stage0)] -impl hash::Hash for Path { - #[inline] - fn hash(&self, state: &mut S) { - self.repr.hash(state) - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl hash::Hash for Path { #[inline] diff --git a/src/libstd/old_path/windows.rs b/src/libstd/old_path/windows.rs index 887dc804c7af3..31a2be1daf359 100644 --- a/src/libstd/old_path/windows.rs +++ b/src/libstd/old_path/windows.rs @@ -127,21 +127,6 @@ impl FromStr for Path { #[derive(Debug, Clone, PartialEq, Copy)] pub struct ParsePathError; -#[cfg(stage0)] -impl hash::Hash for Path { - #[cfg(not(test))] - #[inline] - fn hash(&self, state: &mut S) { - self.repr.hash(state) - } - - #[cfg(test)] - #[inline] - fn hash(&self, _: &mut S) { - // No-op because the `hash` implementation will be wrong. - } -} -#[cfg(not(stage0))] #[stable(feature = "rust1", since = "1.0.0")] impl hash::Hash for Path { #[cfg(not(test))] diff --git a/src/libstd/os.rs b/src/libstd/os.rs index ebbfb8d42be00..86f5c2c356e5e 100644 --- a/src/libstd/os.rs +++ b/src/libstd/os.rs @@ -595,7 +595,7 @@ fn real_args_as_bytes() -> Vec> { // res #[cfg(target_os = "ios")] fn real_args_as_bytes() -> Vec> { - use ffi::c_str_to_bytes; + use ffi::CStr; use iter::range; use mem; @@ -630,7 +630,7 @@ fn real_args_as_bytes() -> Vec> { let tmp = objc_msgSend(args, objectAtSel, i); let utf_c_str: *const libc::c_char = mem::transmute(objc_msgSend(tmp, utf8Sel)); - res.push(c_str_to_bytes(&utf_c_str).to_vec()); + res.push(CStr::from_ptr(utf_c_str).to_bytes().to_vec()); } } diff --git a/src/libstd/path.rs b/src/libstd/path.rs index 49a5efec7c2e4..88543ad85ed11 100755 --- a/src/libstd/path.rs +++ b/src/libstd/path.rs @@ -1324,7 +1324,6 @@ impl AsPath for T { #[cfg(test)] mod tests { use super::*; - use ffi::OsStr; use core::prelude::*; use string::{ToString, String}; use vec::Vec; diff --git a/src/libstd/process.rs b/src/libstd/process.rs index 5baa095d35985..86604f62171ef 100644 --- a/src/libstd/process.rs +++ b/src/libstd/process.rs @@ -489,18 +489,14 @@ impl Child { mod tests { use io::ErrorKind; use io::prelude::*; - use prelude::v1::{Ok, Err, range, drop, Some, None, Vec}; + use prelude::v1::{Ok, Err, drop, Some, Vec}; use prelude::v1::{String, Clone}; use prelude::v1::{SliceExt, Str, StrExt, AsSlice, ToString, GenericPath}; - use path::Path; use old_path; use old_io::fs::PathExtensions; use rt::running_on_valgrind; use str; - use super::{Child, Command, Output, ExitStatus, Stdio}; - use sync::mpsc::channel; - use thread; - use time::Duration; + use super::{Command, Output, Stdio}; // FIXME(#10380) these tests should not all be ignored on android. 
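The hunks above delete the duplicated `#[cfg(stage0)]` hashing impls (for `EnvKey`, the old `Path` types, and the `net` address types) and keep only the new-style `Hash` trait, which is generic over the hasher in the `hash` method rather than in the impl itself. A minimal sketch of that pattern, with a hypothetical `Example` type standing in for the real ones:

```rust
use std::hash::{Hash, Hasher};

struct Example {
    field: u32,
}

impl Hash for Example {
    // The hasher is a method-level type parameter, so one impl serves any
    // `Hasher`; this mirrors the surviving impls in the hunks above.
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Delegate to the field's own `Hash` impl.
        self.field.hash(state);
    }
}
```

Because a single impl now covers every hasher, the per-stage duplicates can simply be dropped once the old `hash::Writer`-based bootstrap compiler is no longer in the picture.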
diff --git a/src/libstd/sys/common/wtf8.rs b/src/libstd/sys/common/wtf8.rs index ca3ae1a7a3436..9119a3c60d855 100644 --- a/src/libstd/sys/common/wtf8.rs +++ b/src/libstd/sys/common/wtf8.rs @@ -32,7 +32,6 @@ use borrow::Cow; use cmp; use fmt; use hash::{Hash, Hasher}; -#[cfg(stage0)] use hash::Writer; use iter::{FromIterator, IntoIterator}; use mem; use num::Int; @@ -796,14 +795,6 @@ impl<'a> Iterator for EncodeWide<'a> { } } -#[cfg(stage0)] -impl Hash for CodePoint { - #[inline] - fn hash(&self, state: &mut S) { - self.value.hash(state) - } -} -#[cfg(not(stage0))] impl Hash for CodePoint { #[inline] fn hash(&self, state: &mut H) { @@ -811,15 +802,6 @@ impl Hash for CodePoint { } } -#[cfg(stage0)] -impl Hash for Wtf8Buf { - #[inline] - fn hash(&self, state: &mut S) { - state.write(&self.bytes); - 0xfeu8.hash(state) - } -} -#[cfg(not(stage0))] impl Hash for Wtf8Buf { #[inline] fn hash(&self, state: &mut H) { @@ -828,15 +810,6 @@ impl Hash for Wtf8Buf { } } -#[cfg(stage0)] -impl<'a, S: Writer + Hasher> Hash for Wtf8 { - #[inline] - fn hash(&self, state: &mut S) { - state.write(&self.bytes); - 0xfeu8.hash(state) - } -} -#[cfg(not(stage0))] impl Hash for Wtf8 { #[inline] fn hash(&self, state: &mut H) { diff --git a/src/libstd/sys/unix/os.rs b/src/libstd/sys/unix/os.rs index 1f82d9a3d79c7..d51f907307e46 100644 --- a/src/libstd/sys/unix/os.rs +++ b/src/libstd/sys/unix/os.rs @@ -220,7 +220,7 @@ pub fn current_exe() -> IoResult { if v.is_null() { Err(IoError::last_error()) } else { - Ok(Path::new(CStr::from_ptr(&v).to_bytes().to_vec())) + Ok(Path::new(CStr::from_ptr(v).to_bytes().to_vec())) } } } diff --git a/src/libstd/sys/unix/process.rs b/src/libstd/sys/unix/process.rs index 582fff2a26b7b..2be841989e6b3 100644 --- a/src/libstd/sys/unix/process.rs +++ b/src/libstd/sys/unix/process.rs @@ -12,8 +12,6 @@ use prelude::v1::*; use self::Req::*; use collections::HashMap; -#[cfg(stage0)] -use collections::hash_map::Hasher; use ffi::CString; use hash::Hash; use old_io::process::{ProcessExit, ExitStatus, ExitSignal}; @@ -64,223 +62,6 @@ impl Process { mkerr_libc(r) } - #[cfg(stage0)] - pub fn spawn(cfg: &C, in_fd: Option
<P>, - out_fd: Option<P>, err_fd: Option<P>
) - -> IoResult - where C: ProcessConfig, P: AsInner, - K: BytesContainer + Eq + Hash, V: BytesContainer - { - use libc::funcs::posix88::unistd::{fork, dup2, close, chdir, execvp}; - - mod rustrt { - extern { - pub fn rust_unset_sigprocmask(); - } - } - - #[cfg(all(target_os = "android", target_arch = "aarch64"))] - unsafe fn getdtablesize() -> c_int { - libc::sysconf(libc::consts::os::sysconf::_SC_OPEN_MAX) as c_int - } - #[cfg(not(all(target_os = "android", target_arch = "aarch64")))] - unsafe fn getdtablesize() -> c_int { - libc::funcs::bsd44::getdtablesize() - } - - unsafe fn set_cloexec(fd: c_int) { - let ret = c::ioctl(fd, c::FIOCLEX); - assert_eq!(ret, 0); - } - - let dirp = cfg.cwd().map(|c| c.as_ptr()).unwrap_or(ptr::null()); - - // temporary until unboxed closures land - let cfg = unsafe { - mem::transmute::<&ProcessConfig,&'static ProcessConfig>(cfg) - }; - - with_envp(cfg.env(), move|envp: *const c_void| { - with_argv(cfg.program(), cfg.args(), move|argv: *const *const libc::c_char| unsafe { - let (input, mut output) = try!(sys::os::pipe()); - - // We may use this in the child, so perform allocations before the - // fork - let devnull = b"/dev/null\0"; - - set_cloexec(output.fd()); - - let pid = fork(); - if pid < 0 { - return Err(super::last_error()) - } else if pid > 0 { - #[inline] - fn combine(arr: &[u8]) -> i32 { - let a = arr[0] as u32; - let b = arr[1] as u32; - let c = arr[2] as u32; - let d = arr[3] as u32; - - ((a << 24) | (b << 16) | (c << 8) | (d << 0)) as i32 - } - - let p = Process{ pid: pid }; - drop(output); - let mut bytes = [0; 8]; - return match input.read(&mut bytes) { - Ok(8) => { - assert!(combine(CLOEXEC_MSG_FOOTER) == combine(&bytes[4.. 8]), - "Validation on the CLOEXEC pipe failed: {:?}", bytes); - let errno = combine(&bytes[0.. 4]); - assert!(p.wait(0).is_ok(), "wait(0) should either return Ok or panic"); - Err(super::decode_error(errno)) - } - Err(ref e) if e.kind == EndOfFile => Ok(p), - Err(e) => { - assert!(p.wait(0).is_ok(), "wait(0) should either return Ok or panic"); - panic!("the CLOEXEC pipe failed: {:?}", e) - }, - Ok(..) => { // pipe I/O up to PIPE_BUF bytes should be atomic - assert!(p.wait(0).is_ok(), "wait(0) should either return Ok or panic"); - panic!("short read on the CLOEXEC pipe") - } - }; - } - - // And at this point we've reached a special time in the life of the - // child. The child must now be considered hamstrung and unable to - // do anything other than syscalls really. Consider the following - // scenario: - // - // 1. Thread A of process 1 grabs the malloc() mutex - // 2. Thread B of process 1 forks(), creating thread C - // 3. Thread C of process 2 then attempts to malloc() - // 4. The memory of process 2 is the same as the memory of - // process 1, so the mutex is locked. - // - // This situation looks a lot like deadlock, right? It turns out - // that this is what pthread_atfork() takes care of, which is - // presumably implemented across platforms. The first thing that - // threads to *before* forking is to do things like grab the malloc - // mutex, and then after the fork they unlock it. - // - // Despite this information, libnative's spawn has been witnessed to - // deadlock on both OSX and FreeBSD. I'm not entirely sure why, but - // all collected backtraces point at malloc/free traffic in the - // child spawned process. - // - // For this reason, the block of code below should contain 0 - // invocations of either malloc of free (or their related friends). 
- // - // As an example of not having malloc/free traffic, we don't close - // this file descriptor by dropping the FileDesc (which contains an - // allocation). Instead we just close it manually. This will never - // have the drop glue anyway because this code never returns (the - // child will either exec() or invoke libc::exit) - let _ = libc::close(input.fd()); - - fn fail(output: &mut FileDesc) -> ! { - let errno = sys::os::errno() as u32; - let bytes = [ - (errno >> 24) as u8, - (errno >> 16) as u8, - (errno >> 8) as u8, - (errno >> 0) as u8, - CLOEXEC_MSG_FOOTER[0], CLOEXEC_MSG_FOOTER[1], - CLOEXEC_MSG_FOOTER[2], CLOEXEC_MSG_FOOTER[3] - ]; - // pipe I/O up to PIPE_BUF bytes should be atomic - assert!(output.write(&bytes).is_ok()); - unsafe { libc::_exit(1) } - } - - rustrt::rust_unset_sigprocmask(); - - // If a stdio file descriptor is set to be ignored (via a -1 file - // descriptor), then we don't actually close it, but rather open - // up /dev/null into that file descriptor. Otherwise, the first file - // descriptor opened up in the child would be numbered as one of the - // stdio file descriptors, which is likely to wreak havoc. - let setup = |src: Option
<P>
, dst: c_int| { - let src = match src { - None => { - let flags = if dst == libc::STDIN_FILENO { - libc::O_RDONLY - } else { - libc::O_RDWR - }; - libc::open(devnull.as_ptr() as *const _, flags, 0) - } - Some(obj) => { - let fd = obj.as_inner().fd(); - // Leak the memory and the file descriptor. We're in the - // child now an all our resources are going to be - // cleaned up very soon - mem::forget(obj); - fd - } - }; - src != -1 && retry(|| dup2(src, dst)) != -1 - }; - - if !setup(in_fd, libc::STDIN_FILENO) { fail(&mut output) } - if !setup(out_fd, libc::STDOUT_FILENO) { fail(&mut output) } - if !setup(err_fd, libc::STDERR_FILENO) { fail(&mut output) } - - // close all other fds - for fd in (3..getdtablesize()).rev() { - if fd != output.fd() { - let _ = close(fd as c_int); - } - } - - match cfg.gid() { - Some(u) => { - if libc::setgid(u as libc::gid_t) != 0 { - fail(&mut output); - } - } - None => {} - } - match cfg.uid() { - Some(u) => { - // When dropping privileges from root, the `setgroups` call - // will remove any extraneous groups. If we don't call this, - // then even though our uid has dropped, we may still have - // groups that enable us to do super-user things. This will - // fail if we aren't root, so don't bother checking the - // return value, this is just done as an optimistic - // privilege dropping function. - extern { - fn setgroups(ngroups: libc::c_int, - ptr: *const libc::c_void) -> libc::c_int; - } - let _ = setgroups(0, ptr::null()); - - if libc::setuid(u as libc::uid_t) != 0 { - fail(&mut output); - } - } - None => {} - } - if cfg.detach() { - // Don't check the error of setsid because it fails if we're the - // process leader already. We just forked so it shouldn't return - // error, but ignore it anyway. - let _ = libc::setsid(); - } - if !dirp.is_null() && chdir(dirp) == -1 { - fail(&mut output); - } - if !envp.is_null() { - *sys::os::environ() = envp as *const _; - } - let _ = execvp(*argv, argv as *mut _); - fail(&mut output); - }) - }) - } - #[cfg(not(stage0))] pub fn spawn(cfg: &C, in_fd: Option
<P>, out_fd: Option<P>, err_fd: Option<P>
) -> IoResult @@ -766,45 +547,6 @@ fn with_argv(prog: &CString, args: &[CString], cb(ptrs.as_ptr()) } -#[cfg(stage0)] -fn with_envp(env: Option<&HashMap>, - cb: F) - -> T - where F : FnOnce(*const c_void) -> T, - K : BytesContainer + Eq + Hash, - V : BytesContainer -{ - // On posixy systems we can pass a char** for envp, which is a - // null-terminated array of "k=v\0" strings. Since we must create - // these strings locally, yet expose a raw pointer to them, we - // create a temporary vector to own the CStrings that outlives the - // call to cb. - match env { - Some(env) => { - let mut tmps = Vec::with_capacity(env.len()); - - for pair in env { - let mut kv = Vec::new(); - kv.push_all(pair.0.container_as_bytes()); - kv.push('=' as u8); - kv.push_all(pair.1.container_as_bytes()); - kv.push(0); // terminating null - tmps.push(kv); - } - - // As with `with_argv`, this is unsafe, since cb could leak the pointers. - let mut ptrs: Vec<*const libc::c_char> = - tmps.iter() - .map(|tmp| tmp.as_ptr() as *const libc::c_char) - .collect(); - ptrs.push(ptr::null()); - - cb(ptrs.as_ptr() as *const c_void) - } - _ => cb(ptr::null()) - } -} -#[cfg(not(stage0))] fn with_envp(env: Option<&HashMap>, cb: F) -> T diff --git a/src/libstd/sys/windows/process.rs b/src/libstd/sys/windows/process.rs index 60d24e6174fd7..334cafd3eb113 100644 --- a/src/libstd/sys/windows/process.rs +++ b/src/libstd/sys/windows/process.rs @@ -10,7 +10,6 @@ use prelude::v1::*; -#[cfg(stage0)] use collections::hash_map::Hasher; use collections; use env; use ffi::CString; @@ -106,170 +105,6 @@ impl Process { } #[allow(deprecated)] - #[cfg(stage0)] - pub fn spawn(cfg: &C, in_fd: Option
<P>, - out_fd: Option<P>, err_fd: Option<P>
) - -> IoResult - where C: ProcessConfig, P: AsInner, - K: BytesContainer + Eq + Hash, V: BytesContainer - { - use libc::types::os::arch::extra::{DWORD, HANDLE, STARTUPINFO}; - use libc::consts::os::extra::{ - TRUE, FALSE, - STARTF_USESTDHANDLES, - INVALID_HANDLE_VALUE, - DUPLICATE_SAME_ACCESS - }; - use libc::funcs::extra::kernel32::{ - GetCurrentProcess, - DuplicateHandle, - CloseHandle, - CreateProcessW - }; - use libc::funcs::extra::msvcrt::get_osfhandle; - - use mem; - use iter::IteratorExt; - use str::StrExt; - - if cfg.gid().is_some() || cfg.uid().is_some() { - return Err(IoError { - kind: old_io::IoUnavailable, - desc: "unsupported gid/uid requested on windows", - detail: None, - }) - } - - // To have the spawning semantics of unix/windows stay the same, we need to - // read the *child's* PATH if one is provided. See #15149 for more details. - let program = cfg.env().and_then(|env| { - for (key, v) in env { - if b"PATH" != key.container_as_bytes() { continue } - - // Split the value and test each path to see if the - // program exists. - for path in os::split_paths(v.container_as_bytes()) { - let path = path.join(cfg.program().as_bytes()) - .with_extension(env::consts::EXE_EXTENSION); - if path.exists() { - return Some(CString::from_slice(path.as_vec())) - } - } - break - } - None - }); - - unsafe { - let mut si = zeroed_startupinfo(); - si.cb = mem::size_of::() as DWORD; - si.dwFlags = STARTF_USESTDHANDLES; - - let cur_proc = GetCurrentProcess(); - - // Similarly to unix, we don't actually leave holes for the stdio file - // descriptors, but rather open up /dev/null equivalents. These - // equivalents are drawn from libuv's windows process spawning. - let set_fd = |fd: &Option
<P>
, slot: &mut HANDLE, - is_stdin: bool| { - match *fd { - None => { - let access = if is_stdin { - libc::FILE_GENERIC_READ - } else { - libc::FILE_GENERIC_WRITE | libc::FILE_READ_ATTRIBUTES - }; - let size = mem::size_of::(); - let mut sa = libc::SECURITY_ATTRIBUTES { - nLength: size as libc::DWORD, - lpSecurityDescriptor: ptr::null_mut(), - bInheritHandle: 1, - }; - let mut filename: Vec = "NUL".utf16_units().collect(); - filename.push(0); - *slot = libc::CreateFileW(filename.as_ptr(), - access, - libc::FILE_SHARE_READ | - libc::FILE_SHARE_WRITE, - &mut sa, - libc::OPEN_EXISTING, - 0, - ptr::null_mut()); - if *slot == INVALID_HANDLE_VALUE { - return Err(super::last_error()) - } - } - Some(ref fd) => { - let orig = get_osfhandle(fd.as_inner().fd()) as HANDLE; - if orig == INVALID_HANDLE_VALUE { - return Err(super::last_error()) - } - if DuplicateHandle(cur_proc, orig, cur_proc, slot, - 0, TRUE, DUPLICATE_SAME_ACCESS) == FALSE { - return Err(super::last_error()) - } - } - } - Ok(()) - }; - - try!(set_fd(&in_fd, &mut si.hStdInput, true)); - try!(set_fd(&out_fd, &mut si.hStdOutput, false)); - try!(set_fd(&err_fd, &mut si.hStdError, false)); - - let cmd_str = make_command_line(program.as_ref().unwrap_or(cfg.program()), - cfg.args()); - let mut pi = zeroed_process_information(); - let mut create_err = None; - - // stolen from the libuv code. - let mut flags = libc::CREATE_UNICODE_ENVIRONMENT; - if cfg.detach() { - flags |= libc::DETACHED_PROCESS | libc::CREATE_NEW_PROCESS_GROUP; - } - - with_envp(cfg.env(), |envp| { - with_dirp(cfg.cwd(), |dirp| { - let mut cmd_str: Vec = cmd_str.utf16_units().collect(); - cmd_str.push(0); - let _lock = CREATE_PROCESS_LOCK.lock().unwrap(); - let created = CreateProcessW(ptr::null(), - cmd_str.as_mut_ptr(), - ptr::null_mut(), - ptr::null_mut(), - TRUE, - flags, envp, dirp, - &mut si, &mut pi); - if created == FALSE { - create_err = Some(super::last_error()); - } - }) - }); - - assert!(CloseHandle(si.hStdInput) != 0); - assert!(CloseHandle(si.hStdOutput) != 0); - assert!(CloseHandle(si.hStdError) != 0); - - match create_err { - Some(err) => return Err(err), - None => {} - } - - // We close the thread handle because we don't care about keeping the - // thread id valid, and we aren't keeping the thread handle around to be - // able to close it later. We don't close the process handle however - // because std::we want the process id to stay valid at least until the - // calling code closes the process handle. - assert!(CloseHandle(pi.hThread) != 0); - - Ok(Process { - pid: pi.dwProcessId as pid_t, - handle: pi.hProcess as *mut () - }) - } - } - #[allow(deprecated)] - #[cfg(not(stage0))] pub fn spawn(cfg: &C, in_fd: Option
<P>, out_fd: Option<P>, err_fd: Option<P>
) -> IoResult @@ -589,35 +424,6 @@ fn make_command_line(prog: &CString, args: &[CString]) -> String { } } -#[cfg(stage0)] -fn with_envp(env: Option<&collections::HashMap>, cb: F) -> T - where K: BytesContainer + Eq + Hash, - V: BytesContainer, - F: FnOnce(*mut c_void) -> T, -{ - // On Windows we pass an "environment block" which is not a char**, but - // rather a concatenation of null-terminated k=v\0 sequences, with a final - // \0 to terminate. - match env { - Some(env) => { - let mut blk = Vec::new(); - - for pair in env { - let kv = format!("{}={}", - pair.0.container_as_str().unwrap(), - pair.1.container_as_str().unwrap()); - blk.extend(kv.utf16_units()); - blk.push(0); - } - - blk.push(0); - - cb(blk.as_mut_ptr() as *mut c_void) - } - _ => cb(ptr::null_mut()) - } -} -#[cfg(not(stage0))] fn with_envp(env: Option<&collections::HashMap>, cb: F) -> T where K: BytesContainer + Eq + Hash, V: BytesContainer, diff --git a/src/libstd/thread.rs b/src/libstd/thread.rs index 3653e7e31d5c6..1f70e1526a096 100644 --- a/src/libstd/thread.rs +++ b/src/libstd/thread.rs @@ -465,16 +465,16 @@ impl Thread { } } - /// Deprecated: use module-level free fucntion. - #[deprecated(since = "1.0.0", reason = "use module-level free fucntion")] + /// Deprecated: use module-level free function. + #[deprecated(since = "1.0.0", reason = "use module-level free function")] #[unstable(feature = "std_misc", reason = "may change with specifics of new Send semantics")] pub fn spawn(f: F) -> Thread where F: FnOnce(), F: Send + 'static { Builder::new().spawn(f).unwrap().thread().clone() } - /// Deprecated: use module-level free fucntion. - #[deprecated(since = "1.0.0", reason = "use module-level free fucntion")] + /// Deprecated: use module-level free function. + #[deprecated(since = "1.0.0", reason = "use module-level free function")] #[unstable(feature = "std_misc", reason = "may change with specifics of new Send semantics")] pub fn scoped<'a, T, F>(f: F) -> JoinGuard<'a, T> where @@ -483,30 +483,30 @@ impl Thread { Builder::new().scoped(f).unwrap() } - /// Deprecated: use module-level free fucntion. - #[deprecated(since = "1.0.0", reason = "use module-level free fucntion")] + /// Deprecated: use module-level free function. + #[deprecated(since = "1.0.0", reason = "use module-level free function")] #[stable(feature = "rust1", since = "1.0.0")] pub fn current() -> Thread { thread_info::current_thread() } - /// Deprecated: use module-level free fucntion. - #[deprecated(since = "1.0.0", reason = "use module-level free fucntion")] + /// Deprecated: use module-level free function. + #[deprecated(since = "1.0.0", reason = "use module-level free function")] #[unstable(feature = "std_misc", reason = "name may change")] pub fn yield_now() { unsafe { imp::yield_now() } } - /// Deprecated: use module-level free fucntion. - #[deprecated(since = "1.0.0", reason = "use module-level free fucntion")] + /// Deprecated: use module-level free function. + #[deprecated(since = "1.0.0", reason = "use module-level free function")] #[inline] #[stable(feature = "rust1", since = "1.0.0")] pub fn panicking() -> bool { unwind::panicking() } - /// Deprecated: use module-level free fucntion. - #[deprecated(since = "1.0.0", reason = "use module-level free fucntion")] + /// Deprecated: use module-level free function. 
+ #[deprecated(since = "1.0.0", reason = "use module-level free function")] #[unstable(feature = "std_misc", reason = "recently introduced")] pub fn park() { let thread = current(); @@ -517,8 +517,8 @@ impl Thread { *guard = false; } - /// Deprecated: use module-level free fucntion. - #[deprecated(since = "1.0.0", reason = "use module-level free fucntion")] + /// Deprecated: use module-level free function. + #[deprecated(since = "1.0.0", reason = "use module-level free function")] #[unstable(feature = "std_misc", reason = "recently introduced")] pub fn park_timeout(dur: Duration) { let thread = current(); @@ -702,7 +702,7 @@ mod test { use boxed::BoxAny; use result; use std::old_io::{ChanReader, ChanWriter}; - use super::{Thread, Builder}; + use super::{Builder}; use thread; use thunk::Thunk; use time::Duration; @@ -767,7 +767,7 @@ mod test { #[test] #[should_fail] fn test_scoped_implicit_panic() { - thread::scoped(|| panic!()); + let _ = thread::scoped(|| panic!()); } #[test] diff --git a/src/libstd/thunk.rs b/src/libstd/thunk.rs index fe39954f0d446..5bede984f13c7 100644 --- a/src/libstd/thunk.rs +++ b/src/libstd/thunk.rs @@ -17,9 +17,6 @@ use core::marker::Send; use core::ops::FnOnce; pub struct Thunk<'a, A=(),R=()> { - #[cfg(stage0)] - invoke: Box+Send>, - #[cfg(not(stage0))] invoke: Box+Send + 'a>, } diff --git a/src/libsyntax/ext/asm.rs b/src/libsyntax/ext/asm.rs index d8cba139fb597..009bfef86230f 100644 --- a/src/libsyntax/ext/asm.rs +++ b/src/libsyntax/ext/asm.rs @@ -113,7 +113,7 @@ pub fn expand_asm<'cx>(cx: &'cx mut ExtCtxt, sp: Span, tts: &[ast::TokenTree]) Some(('=', _)) => None, Some(('+', operand)) => { Some(token::intern_and_get_ident(&format!( - "={}", operand)[])) + "={}", operand))) } _ => { cx.span_err(span, "output operand constraint lacks '=' or '+'"); diff --git a/src/libsyntax/ext/base.rs b/src/libsyntax/ext/base.rs index d4ccabbd63b4a..2ef90f04f7527 100644 --- a/src/libsyntax/ext/base.rs +++ b/src/libsyntax/ext/base.rs @@ -83,15 +83,15 @@ pub enum Annotatable { impl Annotatable { pub fn attrs(&self) -> &[ast::Attribute] { match *self { - Annotatable::Item(ref i) => &i.attrs[], + Annotatable::Item(ref i) => &i.attrs, Annotatable::TraitItem(ref i) => match *i { - ast::TraitItem::RequiredMethod(ref tm) => &tm.attrs[], - ast::TraitItem::ProvidedMethod(ref m) => &m.attrs[], - ast::TraitItem::TypeTraitItem(ref at) => &at.attrs[], + ast::TraitItem::RequiredMethod(ref tm) => &tm.attrs, + ast::TraitItem::ProvidedMethod(ref m) => &m.attrs, + ast::TraitItem::TypeTraitItem(ref at) => &at.attrs, }, Annotatable::ImplItem(ref i) => match *i { - ast::ImplItem::MethodImplItem(ref m) => &m.attrs[], - ast::ImplItem::TypeImplItem(ref t) => &t.attrs[], + ast::ImplItem::MethodImplItem(ref m) => &m.attrs, + ast::ImplItem::TypeImplItem(ref t) => &t.attrs, } } } @@ -639,7 +639,7 @@ impl<'a> ExtCtxt<'a> { pub fn mod_pop(&mut self) { self.mod_path.pop().unwrap(); } pub fn mod_path(&self) -> Vec { let mut v = Vec::new(); - v.push(token::str_to_ident(&self.ecfg.crate_name[])); + v.push(token::str_to_ident(&self.ecfg.crate_name)); v.extend(self.mod_path.iter().cloned()); return v; } @@ -648,7 +648,7 @@ impl<'a> ExtCtxt<'a> { if self.recursion_count > self.ecfg.recursion_limit { self.span_fatal(ei.call_site, &format!("recursion limit reached while expanding the macro `{}`", - ei.callee.name)[]); + ei.callee.name)); } let mut call_site = ei.call_site; @@ -773,7 +773,7 @@ pub fn check_zero_tts(cx: &ExtCtxt, tts: &[ast::TokenTree], name: &str) { if tts.len() != 0 { - cx.span_err(sp, &format!("{} 
takes no arguments", name)[]); + cx.span_err(sp, &format!("{} takes no arguments", name)); } } @@ -786,12 +786,12 @@ pub fn get_single_str_from_tts(cx: &mut ExtCtxt, -> Option { let mut p = cx.new_parser_from_tts(tts); if p.token == token::Eof { - cx.span_err(sp, &format!("{} takes 1 argument", name)[]); + cx.span_err(sp, &format!("{} takes 1 argument", name)); return None } let ret = cx.expander().fold_expr(p.parse_expr()); if p.token != token::Eof { - cx.span_err(sp, &format!("{} takes 1 argument", name)[]); + cx.span_err(sp, &format!("{} takes 1 argument", name)); } expr_to_string(cx, ret, "argument must be a string literal").map(|(s, _)| { s.to_string() diff --git a/src/libsyntax/ext/build.rs b/src/libsyntax/ext/build.rs index 5bfd4a9f6111c..8923290d655c8 100644 --- a/src/libsyntax/ext/build.rs +++ b/src/libsyntax/ext/build.rs @@ -762,7 +762,7 @@ impl<'a> AstBuilder for ExtCtxt<'a> { fn expr_fail(&self, span: Span, msg: InternedString) -> P { let loc = self.codemap().lookup_char_pos(span.lo); let expr_file = self.expr_str(span, - token::intern_and_get_ident(&loc.file.name[])); + token::intern_and_get_ident(&loc.file.name)); let expr_line = self.expr_usize(span, loc.line); let expr_file_line_tuple = self.expr_tuple(span, vec!(expr_file, expr_line)); let expr_file_line_ptr = self.expr_addr_of(span, expr_file_line_tuple); diff --git a/src/libsyntax/ext/concat.rs b/src/libsyntax/ext/concat.rs index 38098e50dee83..84f786e9780f0 100644 --- a/src/libsyntax/ext/concat.rs +++ b/src/libsyntax/ext/concat.rs @@ -40,14 +40,14 @@ pub fn expand_syntax_ext(cx: &mut base::ExtCtxt, ast::LitInt(i, ast::UnsignedIntLit(_)) | ast::LitInt(i, ast::SignedIntLit(_, ast::Plus)) | ast::LitInt(i, ast::UnsuffixedIntLit(ast::Plus)) => { - accumulator.push_str(&format!("{}", i)[]); + accumulator.push_str(&format!("{}", i)); } ast::LitInt(i, ast::SignedIntLit(_, ast::Minus)) | ast::LitInt(i, ast::UnsuffixedIntLit(ast::Minus)) => { - accumulator.push_str(&format!("-{}", i)[]); + accumulator.push_str(&format!("-{}", i)); } ast::LitBool(b) => { - accumulator.push_str(&format!("{}", b)[]); + accumulator.push_str(&format!("{}", b)); } ast::LitByte(..) | ast::LitBinary(..) => { diff --git a/src/libsyntax/ext/deriving/clone.rs b/src/libsyntax/ext/deriving/clone.rs index 518fbcc80ee95..5f460264216a1 100644 --- a/src/libsyntax/ext/deriving/clone.rs +++ b/src/libsyntax/ext/deriving/clone.rs @@ -81,11 +81,11 @@ fn cs_clone( EnumNonMatchingCollapsed (..) => { cx.span_bug(trait_span, &format!("non-matching enum variants in \ - `derive({})`", name)[]) + `derive({})`", name)) } StaticEnum(..) | StaticStruct(..) 
=> { cx.span_bug(trait_span, - &format!("static method in `derive({})`", name)[]) + &format!("static method in `derive({})`", name)) } } @@ -102,7 +102,7 @@ fn cs_clone( None => { cx.span_bug(trait_span, &format!("unnamed field in normal struct in \ - `derive({})`", name)[]) + `derive({})`", name)) } }; cx.field_imm(field.span, ident, subcall(field)) diff --git a/src/libsyntax/ext/deriving/decodable.rs b/src/libsyntax/ext/deriving/decodable.rs index ab0f64e823f9c..f27bbc338e570 100644 --- a/src/libsyntax/ext/deriving/decodable.rs +++ b/src/libsyntax/ext/deriving/decodable.rs @@ -204,7 +204,7 @@ fn decode_static_fields(cx: &mut ExtCtxt, } else { let fields = fields.iter().enumerate().map(|(i, &span)| { getarg(cx, span, - token::intern_and_get_ident(&format!("_field{}", i)[]), + token::intern_and_get_ident(&format!("_field{}", i)), i) }).collect(); diff --git a/src/libsyntax/ext/deriving/encodable.rs b/src/libsyntax/ext/deriving/encodable.rs index dd6094705995e..8038074cee14f 100644 --- a/src/libsyntax/ext/deriving/encodable.rs +++ b/src/libsyntax/ext/deriving/encodable.rs @@ -191,7 +191,7 @@ fn encodable_substructure(cx: &mut ExtCtxt, trait_span: Span, let name = match name { Some(id) => token::get_ident(id), None => { - token::intern_and_get_ident(&format!("_field{}", i)[]) + token::intern_and_get_ident(&format!("_field{}", i)) } }; let enc = cx.expr_method_call(span, self_.clone(), diff --git a/src/libsyntax/ext/deriving/generic/mod.rs b/src/libsyntax/ext/deriving/generic/mod.rs index b912ed34ae0ad..36bd8d39a8363 100644 --- a/src/libsyntax/ext/deriving/generic/mod.rs +++ b/src/libsyntax/ext/deriving/generic/mod.rs @@ -363,7 +363,7 @@ impl<'a> TraitDef<'a> { // generated implementations are linted let mut attrs = newitem.attrs.clone(); attrs.extend(item.attrs.iter().filter(|a| { - match &a.name()[] { + match &a.name()[..] 
{ "allow" | "warn" | "deny" | "forbid" => true, _ => false, } @@ -671,7 +671,7 @@ impl<'a> MethodDef<'a> { for (i, ty) in self.args.iter().enumerate() { let ast_ty = ty.to_ty(cx, trait_.span, type_ident, generics); - let ident = cx.ident_of(&format!("__arg_{}", i)[]); + let ident = cx.ident_of(&format!("__arg_{}", i)); arg_tys.push((ident, ast_ty)); let arg_expr = cx.expr_ident(trait_.span, ident); @@ -778,7 +778,7 @@ impl<'a> MethodDef<'a> { struct_path, struct_def, &format!("__self_{}", - i)[], + i), ast::MutImmutable); patterns.push(pat); raw_fields.push(ident_expr); @@ -971,7 +971,7 @@ impl<'a> MethodDef<'a> { let mut subpats = Vec::with_capacity(self_arg_names.len()); let mut self_pats_idents = Vec::with_capacity(self_arg_names.len() - 1); let first_self_pat_idents = { - let (p, idents) = mk_self_pat(cx, &self_arg_names[0][]); + let (p, idents) = mk_self_pat(cx, &self_arg_names[0]); subpats.push(p); idents }; @@ -1289,7 +1289,7 @@ impl<'a> TraitDef<'a> { cx.span_bug(sp, "a struct with named and unnamed fields in `derive`"); } }; - let ident = cx.ident_of(&format!("{}_{}", prefix, i)[]); + let ident = cx.ident_of(&format!("{}_{}", prefix, i)); paths.push(codemap::Spanned{span: sp, node: ident}); let val = cx.expr( sp, ast::ExprParen(cx.expr_deref(sp, cx.expr_path(cx.path_ident(sp,ident))))); @@ -1335,7 +1335,7 @@ impl<'a> TraitDef<'a> { let mut ident_expr = Vec::new(); for (i, va) in variant_args.iter().enumerate() { let sp = self.set_expn_info(cx, va.ty.span); - let ident = cx.ident_of(&format!("{}_{}", prefix, i)[]); + let ident = cx.ident_of(&format!("{}_{}", prefix, i)); let path1 = codemap::Spanned{span: sp, node: ident}; paths.push(path1); let expr_path = cx.expr_path(cx.path_ident(sp, ident)); @@ -1378,7 +1378,7 @@ pub fn cs_fold(use_foldl: bool, field.span, old, field.self_.clone(), - &field.other[]) + &field.other) }) } else { all_fields.iter().rev().fold(base, |old, field| { @@ -1386,7 +1386,7 @@ pub fn cs_fold(use_foldl: bool, field.span, old, field.self_.clone(), - &field.other[]) + &field.other) }) } }, diff --git a/src/libsyntax/ext/deriving/mod.rs b/src/libsyntax/ext/deriving/mod.rs index f8bc331bfcfe7..eee780f457c9f 100644 --- a/src/libsyntax/ext/deriving/mod.rs +++ b/src/libsyntax/ext/deriving/mod.rs @@ -157,7 +157,7 @@ pub fn expand_meta_derive(cx: &mut ExtCtxt, cx.span_err(titem.span, &format!("unknown `derive` \ trait: `{}`", - *tname)[]); + *tname)); } }; } diff --git a/src/libsyntax/ext/env.rs b/src/libsyntax/ext/env.rs index 9c04d1e928295..93f8ee5042bb1 100644 --- a/src/libsyntax/ext/env.rs +++ b/src/libsyntax/ext/env.rs @@ -83,7 +83,7 @@ pub fn expand_env<'cx>(cx: &'cx mut ExtCtxt, sp: Span, tts: &[ast::TokenTree]) None => { token::intern_and_get_ident(&format!("environment variable `{}` \ not defined", - var)[]) + var)) } Some(second) => { match expr_to_string(cx, second, "expected string literal") { diff --git a/src/libsyntax/ext/expand.rs b/src/libsyntax/ext/expand.rs index d4dda7390a52f..bc239d0c7c269 100644 --- a/src/libsyntax/ext/expand.rs +++ b/src/libsyntax/ext/expand.rs @@ -389,7 +389,7 @@ fn expand_mac_invoc(mac: ast::Mac, span: codemap::Span, fld.cx.span_err( pth.span, &format!("macro undefined: '{}!'", - &extnamestr)[]); + &extnamestr)); // let compilation continue None @@ -426,7 +426,7 @@ fn expand_mac_invoc(mac: ast::Mac, span: codemap::Span, pth.span, &format!("non-expression macro in expression position: {}", &extnamestr[..] 
- )[]); + )); return None; } }; @@ -436,7 +436,7 @@ fn expand_mac_invoc(mac: ast::Mac, span: codemap::Span, fld.cx.span_err( pth.span, &format!("'{}' is not a tt-style macro", - &extnamestr)[]); + &extnamestr)); None } } @@ -608,7 +608,7 @@ pub fn expand_item_mac(it: P, None => { fld.cx.span_err(path_span, &format!("macro undefined: '{}!'", - extnamestr)[]); + extnamestr)); // let compilation continue return SmallVector::zero(); } @@ -618,10 +618,9 @@ pub fn expand_item_mac(it: P, if it.ident.name != parse::token::special_idents::invalid.name { fld.cx .span_err(path_span, - &format!("macro {}! expects no ident argument, \ - given '{}'", - extnamestr, - token::get_ident(it.ident))[]); + &format!("macro {}! expects no ident argument, given '{}'", + extnamestr, + token::get_ident(it.ident))); return SmallVector::zero(); } fld.cx.bt_push(ExpnInfo { @@ -640,7 +639,7 @@ pub fn expand_item_mac(it: P, if it.ident.name == parse::token::special_idents::invalid.name { fld.cx.span_err(path_span, &format!("macro {}! expects an ident argument", - &extnamestr)[]); + &extnamestr)); return SmallVector::zero(); } fld.cx.bt_push(ExpnInfo { @@ -659,7 +658,7 @@ pub fn expand_item_mac(it: P, if it.ident.name == parse::token::special_idents::invalid.name { fld.cx.span_err(path_span, &format!("macro_rules! expects an ident argument") - []); + ); return SmallVector::zero(); } fld.cx.bt_push(ExpnInfo { @@ -691,7 +690,7 @@ pub fn expand_item_mac(it: P, _ => { fld.cx.span_err(it.span, &format!("{}! is not legal in item position", - &extnamestr)[]); + &extnamestr)); return SmallVector::zero(); } } @@ -710,7 +709,7 @@ pub fn expand_item_mac(it: P, None => { fld.cx.span_err(path_span, &format!("non-item macro in item position: {}", - &extnamestr)[]); + &extnamestr)); return SmallVector::zero(); } }; @@ -954,7 +953,7 @@ fn expand_pat(p: P, fld: &mut MacroExpander) -> P { None => { fld.cx.span_err(pth.span, &format!("macro undefined: '{}!'", - extnamestr)[]); + extnamestr)); // let compilation continue return DummyResult::raw_pat(span); } @@ -983,7 +982,7 @@ fn expand_pat(p: P, fld: &mut MacroExpander) -> P { &format!( "non-pattern macro in pattern position: {}", &extnamestr - )[] + ) ); return DummyResult::raw_pat(span); } @@ -995,7 +994,7 @@ fn expand_pat(p: P, fld: &mut MacroExpander) -> P { _ => { fld.cx.span_err(span, &format!("{}! 
is not legal in pattern position",
-                                     &extnamestr)[]);
+                                     &extnamestr));
                return DummyResult::raw_pat(span);
            }
        }
@@ -1981,7 +1980,7 @@ foo_module!();
    // the xx binding should bind all of the xx varrefs:
    for (idx,v) in varrefs.iter().filter(|p| {
        p.segments.len() == 1
-       && "xx" == &token::get_ident(p.segments[0].identifier)[]
+       && "xx" == &*token::get_ident(p.segments[0].identifier)
    }).enumerate() {
        if mtwt::resolve(v.segments[0].identifier) != resolved_binding {
            println!("uh oh, xx binding didn't match xx varref:");
diff --git a/src/libsyntax/ext/format.rs b/src/libsyntax/ext/format.rs
index e17329d7d3300..1c2374e31f130 100644
--- a/src/libsyntax/ext/format.rs
+++ b/src/libsyntax/ext/format.rs
@@ -113,7 +113,7 @@ fn parse_args(ecx: &mut ExtCtxt, sp: Span, tts: &[ast::TokenTree])
            _ => {
                ecx.span_err(p.span,
                             &format!("expected ident for named argument, found `{}`",
-                                     p.this_token_to_string())[]);
+                                     p.this_token_to_string()));
                return None;
            }
        };
@@ -127,7 +127,7 @@ fn parse_args(ecx: &mut ExtCtxt, sp: Span, tts: &[ast::TokenTree])
            Some(prev) => {
                ecx.span_err(e.span,
                             &format!("duplicate argument named `{}`",
-                                     name)[]);
+                                     name));
                ecx.parse_sess.span_diagnostic.span_note(prev.span, "previously here");
                continue
            }
@@ -281,19 +281,19 @@ impl<'a, 'b> Context<'a, 'b> {
                                  &format!("argument redeclared with type `{}` when \
                                            it was previously `{}`",
                                           *ty,
-                                          *cur)[]);
+                                          *cur));
            }
            (&Known(ref cur), _) => {
                self.ecx.span_err(sp,
                                  &format!("argument used to format with `{}` was \
                                            attempted to not be used for formatting",
-                                          *cur)[]);
+                                          *cur));
            }
            (_, &Known(ref ty)) => {
                self.ecx.span_err(sp,
                                  &format!("argument previously used as a format \
                                            argument attempted to be used as `{}`",
-                                          *ty)[]);
+                                          *ty));
            }
            (_, _) => {
                self.ecx.span_err(sp, "argument declared with multiple formats");
@@ -337,7 +337,7 @@ impl<'a, 'b> Context<'a, 'b> {
    /// Translate the accumulated string literals to a literal expression
    fn trans_literal_string(&mut self) -> P {
        let sp = self.fmtsp;
-        let s = token::intern_and_get_ident(&self.literal[]);
+        let s = token::intern_and_get_ident(&self.literal);
        self.literal.clear();
        self.ecx.expr_str(sp, s)
    }
@@ -494,7 +494,7 @@ impl<'a, 'b> Context<'a, 'b> {
                    None => continue // error already generated
                };
-                let name = self.ecx.ident_of(&format!("__arg{}", i)[]);
+                let name = self.ecx.ident_of(&format!("__arg{}", i));
                pats.push(self.ecx.pat_ident(e.span, name));
                locals.push(Context::format_arg(self.ecx, e.span, arg_ty,
                                                self.ecx.expr_ident(e.span, name)));
@@ -511,7 +511,7 @@ impl<'a, 'b> Context<'a, 'b> {
            };
            let lname = self.ecx.ident_of(&format!("__arg{}",
-                                                   *name)[]);
+                                                   *name));
            pats.push(self.ecx.pat_ident(e.span, lname));
            names[self.name_positions[*name]] =
                Some(Context::format_arg(self.ecx, e.span, arg_ty,
@@ -600,7 +600,7 @@ impl<'a, 'b> Context<'a, 'b> {
            _ => {
                ecx.span_err(sp,
                             &format!("unknown format trait `{}`",
-                                     *tyname)[]);
+                                     *tyname));
                "Dummy"
            }
        }
@@ -694,7 +694,7 @@ pub fn expand_preparsed_format_args(ecx: &mut ExtCtxt, sp: Span,
    }
    if !parser.errors.is_empty() {
        cx.ecx.span_err(cx.fmtsp, &format!("invalid format string: {}",
-                                           parser.errors.remove(0))[]);
+                                           parser.errors.remove(0)));
        return DummyResult::raw_expr(sp);
    }
    if !cx.literal.is_empty() {
diff --git a/src/libsyntax/ext/quote.rs b/src/libsyntax/ext/quote.rs
index 2c7bf713aad85..554529b5cb23c 100644
--- a/src/libsyntax/ext/quote.rs
+++ b/src/libsyntax/ext/quote.rs
@@ -466,7 +466,7 @@ pub fn expand_quote_stmt(cx: &mut ExtCtxt,
}

fn ids_ext(strs: Vec ) -> Vec {
-    strs.iter().map(|str| str_to_ident(&(*str)[])).collect()
+    strs.iter().map(|str| str_to_ident(&(*str))).collect()
}

fn id_ext(str: &str) -> ast::Ident {
diff --git a/src/libsyntax/ext/source_util.rs b/src/libsyntax/ext/source_util.rs
index c8d48750c7509..ac82effeaeacd 100644
--- a/src/libsyntax/ext/source_util.rs
+++ b/src/libsyntax/ext/source_util.rs
@@ -57,7 +57,7 @@ pub fn expand_file(cx: &mut ExtCtxt, sp: Span, tts: &[ast::TokenTree])

    let topmost = cx.original_span_in_file();
    let loc = cx.codemap().lookup_char_pos(topmost.lo);
-    let filename = token::intern_and_get_ident(&loc.file.name[]);
+    let filename = token::intern_and_get_ident(&loc.file.name);
    base::MacExpr::new(cx.expr_str(topmost, filename))
}
diff --git a/src/libsyntax/ext/tt/macro_parser.rs b/src/libsyntax/ext/tt/macro_parser.rs
index 664f7b3e08848..ce513bc91f5a9 100644
--- a/src/libsyntax/ext/tt/macro_parser.rs
+++ b/src/libsyntax/ext/tt/macro_parser.rs
@@ -153,7 +153,7 @@ pub fn count_names(ms: &[TokenTree]) -> usize {
            seq.num_captures
        }
        &TtDelimited(_, ref delim) => {
-            count_names(&delim.tts[])
+            count_names(&delim.tts)
        }
        &TtToken(_, MatchNt(..)) => {
            1
diff --git a/src/libsyntax/parse/attr.rs b/src/libsyntax/parse/attr.rs
index 06e8728d23672..a0e2b4dbf5a70 100644
--- a/src/libsyntax/parse/attr.rs
+++ b/src/libsyntax/parse/attr.rs
@@ -94,7 +94,7 @@ impl<'a> ParserAttr for Parser<'a> {
            }
            _ => {
                let token_str = self.this_token_to_string();
-                self.fatal(&format!("expected `#`, found `{}`", token_str)[]);
+                self.fatal(&format!("expected `#`, found `{}`", token_str));
            }
        };
diff --git a/src/libsyntax/parse/lexer/mod.rs b/src/libsyntax/parse/lexer/mod.rs
index fd08cbd161bfe..83d2bb0cc70a9 100644
--- a/src/libsyntax/parse/lexer/mod.rs
+++ b/src/libsyntax/parse/lexer/mod.rs
@@ -1109,7 +1109,7 @@ impl<'a> StringReader<'a> {
            // expansion purposes. See #12512 for the gory details of why
            // this is necessary.
            let ident = self.with_str_from(start, |lifetime_name| {
-                str_to_ident(&format!("'{}", lifetime_name)[])
+                str_to_ident(&format!("'{}", lifetime_name))
            });

            // Conjure up a "keyword checking ident" to make sure that
diff --git a/src/libsyntax/parse/mod.rs b/src/libsyntax/parse/mod.rs
index 7ed48bdbb928d..43dfcbae57e49 100644
--- a/src/libsyntax/parse/mod.rs
+++ b/src/libsyntax/parse/mod.rs
@@ -254,7 +254,7 @@ pub fn file_to_filemap(sess: &ParseSess, path: &Path, spanopt: Option)
        Ok(bytes) => bytes,
        Err(e) => {
            err(&format!("couldn't read {:?}: {}",
-                         path.display(), e)[]);
+                         path.display(), e));
            unreachable!()
        }
    };
@@ -264,7 +264,7 @@ pub fn file_to_filemap(sess: &ParseSess, path: &Path, spanopt: Option)
                                   path.as_str().unwrap().to_string())
        }
        None => {
-            err(&format!("{:?} is not UTF-8 encoded", path.display())[])
+            err(&format!("{:?} is not UTF-8 encoded", path.display()))
        }
    }
    unreachable!()
@@ -827,19 +827,19 @@ mod test {
                        ast::TtDelimited(_, ref macro_delimed)]
                    if name_macro_rules.as_str() == "macro_rules"
                    && name_zip.as_str() == "zip" => {
-                        match &macro_delimed.tts[] {
+                        match &macro_delimed.tts[..] {
                            [ast::TtDelimited(_, ref first_delimed),
                             ast::TtToken(_, token::FatArrow),
                             ast::TtDelimited(_, ref second_delimed)]
                            if macro_delimed.delim == token::Paren => {
-                                match &first_delimed.tts[] {
+                                match &first_delimed.tts[..] {
                                    [ast::TtToken(_, token::Dollar),
                                     ast::TtToken(_, token::Ident(name, token::Plain))]
                                    if first_delimed.delim == token::Paren
                                    && name.as_str() == "a" => {},
                                    _ => panic!("value 3: {:?}", **first_delimed),
                                }
-                                match &second_delimed.tts[] {
+                                match &second_delimed.tts[..] {
                                    [ast::TtToken(_, token::Dollar),
                                     ast::TtToken(_, token::Ident(name, token::Plain))]
                                    if second_delimed.delim == token::Paren
@@ -1207,7 +1207,7 @@ mod test {
        let source = "/// doc comment\r\n/// line 2\r\nfn foo() {}".to_string();
        let item = parse_item_from_source_str(name.clone(), source, Vec::new(), &sess).unwrap();
-        let docs = item.attrs.iter().filter(|a| &a.name()[] == "doc")
+        let docs = item.attrs.iter().filter(|a| &*a.name() == "doc")
                    .map(|a| a.value_str().unwrap().to_string()).collect::>();
        let b: &[_] = &["/// doc comment".to_string(), "/// line 2".to_string()];
        assert_eq!(&docs[..], b);
diff --git a/src/libsyntax/parse/obsolete.rs b/src/libsyntax/parse/obsolete.rs
index 8480772ce6c1a..e6bcb8ac74567 100644
--- a/src/libsyntax/parse/obsolete.rs
+++ b/src/libsyntax/parse/obsolete.rs
@@ -106,16 +106,16 @@ impl<'a> ParserObsoleteMethods for parser::Parser<'a> {
              desc: &str,
              error: bool)
    {
        if error {
-            self.span_err(sp, &format!("obsolete syntax: {}", kind_str)[]);
+            self.span_err(sp, &format!("obsolete syntax: {}", kind_str));
        } else {
-            self.span_warn(sp, &format!("obsolete syntax: {}", kind_str)[]);
+            self.span_warn(sp, &format!("obsolete syntax: {}", kind_str));
        }
        if !self.obsolete_set.contains(&kind) {
            self.sess
                .span_diagnostic
                .handler()
-                .note(&format!("{}", desc)[]);
+                .note(&format!("{}", desc));
            self.obsolete_set.insert(kind);
        }
    }
diff --git a/src/libsyntax/parse/parser.rs b/src/libsyntax/parse/parser.rs
index 370201e53825e..88c349371592c 100644
--- a/src/libsyntax/parse/parser.rs
+++ b/src/libsyntax/parse/parser.rs
@@ -2562,7 +2562,8 @@ impl<'a> Parser<'a> {
                    let index = self.mk_index(e, ix);
                    e = self.mk_expr(lo, hi, index);
-                    self.obsolete(span, ObsoleteSyntax::EmptyIndex);
+                    let obsolete_span = mk_sp(bracket_pos, hi);
+                    self.obsolete(obsolete_span, ObsoleteSyntax::EmptyIndex);
                } else {
                    let ix = self.parse_expr();
                    hi = self.span.hi;
@@ -5190,7 +5191,7 @@ impl<'a> Parser<'a> {
                       -> (ast::Item_, Vec ) {
        let mut prefix = Path::new(self.sess.span_diagnostic.cm.span_to_filename(self.span));
        prefix.pop();
-        let mod_path = Path::new(".").join_many(&self.mod_path_stack[]);
+        let mod_path = Path::new(".").join_many(&self.mod_path_stack);
        let dir_path = prefix.join(&mod_path);
        let mod_string = token::get_ident(id);
        let (file_path, owns_directory) = match ::attr::first_attr_value_str_by_name(
diff --git a/src/libsyntax/parse/token.rs b/src/libsyntax/parse/token.rs
index 433c013591c2d..2797ef084d9ca 100644
--- a/src/libsyntax/parse/token.rs
+++ b/src/libsyntax/parse/token.rs
@@ -652,47 +652,47 @@ impl BytesContainer for InternedString {

impl fmt::Debug for InternedString {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        fmt::Debug::fmt(&self.string[], f)
+        fmt::Debug::fmt(&self.string, f)
    }
}

impl fmt::Display for InternedString {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        fmt::Display::fmt(&self.string[], f)
+        fmt::Display::fmt(&self.string, f)
    }
}

impl<'a> PartialEq<&'a str> for InternedString {
    #[inline(always)]
    fn eq(&self, other: & &'a str) -> bool {
-        PartialEq::eq(&self.string[], *other)
+        PartialEq::eq(&self.string[..], *other)
    }
    #[inline(always)]
    fn ne(&self, other: & &'a str) -> bool {
-        PartialEq::ne(&self.string[], *other)
+        PartialEq::ne(&self.string[..], *other)
    }
}

impl<'a> PartialEq for &'a str {
    #[inline(always)]
    fn eq(&self, other: &InternedString) -> bool {
-        PartialEq::eq(*self, &other.string[])
+        PartialEq::eq(*self, &other.string[..])
    }
    #[inline(always)]
    fn ne(&self, other: &InternedString) -> bool {
-        PartialEq::ne(*self, &other.string[])
+        PartialEq::ne(*self, &other.string[..])
    }
}

impl Decodable for InternedString {
    fn decode(d: &mut D) -> Result {
-        Ok(get_name(get_ident_interner().intern(&try!(d.read_str())[])))
+        Ok(get_name(get_ident_interner().intern(&try!(d.read_str())[..])))
    }
}

impl Encodable for InternedString {
    fn encode(&self, s: &mut S) -> Result<(), S::Error> {
-        s.emit_str(&self.string[])
+        s.emit_str(&self.string)
    }
}
diff --git a/src/libsyntax/print/pp.rs b/src/libsyntax/print/pp.rs
index 1593bfb97fe1d..5b3fde8535b3d 100644
--- a/src/libsyntax/print/pp.rs
+++ b/src/libsyntax/print/pp.rs
@@ -139,7 +139,7 @@ pub fn buf_str(toks: &[Token],
        }
        s.push_str(&format!("{}={}",
                            szs[i],
-                            tok_str(&toks[i]))[]);
+                            tok_str(&toks[i])));
        i += 1;
        i %= n;
    }
diff --git a/src/libsyntax/print/pprust.rs b/src/libsyntax/print/pprust.rs
index f26578e740120..92e7f4d287091 100644
--- a/src/libsyntax/print/pprust.rs
+++ b/src/libsyntax/print/pprust.rs
@@ -2342,7 +2342,7 @@ impl<'a> State<'a> {

        // HACK(eddyb) ignore the separately printed self argument.
        let args = if first {
-            &decl.inputs[]
+            &decl.inputs[..]
        } else {
            &decl.inputs[1..]
        };
diff --git a/src/libsyntax/ptr.rs b/src/libsyntax/ptr.rs
index adb5383a8fd54..ca3a1848c3a61 100644
--- a/src/libsyntax/ptr.rs
+++ b/src/libsyntax/ptr.rs
@@ -111,13 +111,6 @@ impl Display for P {
    }
}

-#[cfg(stage0)]
-impl> Hash for P {
-    fn hash(&self, state: &mut S) {
-        (**self).hash(state);
-    }
-}
-#[cfg(not(stage0))]
impl Hash for P {
    fn hash(&self, state: &mut H) {
        (**self).hash(state);
diff --git a/src/libsyntax/std_inject.rs b/src/libsyntax/std_inject.rs
index 4e4a571ede7b8..ac7cdb1b41307 100644
--- a/src/libsyntax/std_inject.rs
+++ b/src/libsyntax/std_inject.rs
@@ -38,7 +38,7 @@ pub fn maybe_inject_prelude(krate: ast::Crate) -> ast::Crate {
}

pub fn use_std(krate: &ast::Crate) -> bool {
-    !attr::contains_name(&krate.attrs[], "no_std")
+    !attr::contains_name(&krate.attrs, "no_std")
}

fn no_prelude(attrs: &[ast::Attribute]) -> bool {
@@ -88,14 +88,14 @@ impl fold::Folder for PreludeInjector {
        // only add `use std::prelude::*;` if there wasn't a
        // `#![no_implicit_prelude]` at the crate level.
        // fold_mod() will insert glob path.
-        if !no_prelude(&krate.attrs[]) {
+        if !no_prelude(&krate.attrs) {
            krate.module = self.fold_mod(krate.module);
        }
        krate
    }

    fn fold_item(&mut self, item: P) -> SmallVector> {
-        if !no_prelude(&item.attrs[]) {
+        if !no_prelude(&item.attrs) {
            // only recur if there wasn't `#![no_implicit_prelude]`
            // on this item, i.e. this means that the prelude is not
            // implicitly imported though the whole subtree
diff --git a/src/libsyntax/test.rs b/src/libsyntax/test.rs
index 7b1fc91e45b5b..5bada41badfd8 100644
--- a/src/libsyntax/test.rs
+++ b/src/libsyntax/test.rs
@@ -73,14 +73,14 @@ pub fn modify_for_testing(sess: &ParseSess,
    // We generate the test harness when building in the 'test'
    // configuration, either with the '--test' or '--cfg test'
    // command line options.
-    let should_test = attr::contains_name(&krate.config[], "test");
+    let should_test = attr::contains_name(&krate.config, "test");

    // Check for #[reexport_test_harness_main = "some_name"] which
    // creates a `use some_name = __test::main;`. This needs to be
    // unconditional, so that the attribute is still marked as used in
    // non-test builds.
    let reexport_test_harness_main =
-        attr::first_attr_value_str_by_name(&krate.attrs[],
+        attr::first_attr_value_str_by_name(&krate.attrs,
                                           "reexport_test_harness_main");

    if should_test {
@@ -306,7 +306,7 @@ enum HasTestSignature {

fn is_test_fn(cx: &TestCtxt, i: &ast::Item) -> bool {
-    let has_test_attr = attr::contains_name(&i.attrs[], "test");
+    let has_test_attr = attr::contains_name(&i.attrs, "test");

    fn has_test_signature(i: &ast::Item) -> HasTestSignature {
        match &i.node {
@@ -342,7 +342,7 @@ fn is_test_fn(cx: &TestCtxt, i: &ast::Item) -> bool {
}

fn is_bench_fn(cx: &TestCtxt, i: &ast::Item) -> bool {
-    let has_bench_attr = attr::contains_name(&i.attrs[], "bench");
+    let has_bench_attr = attr::contains_name(&i.attrs, "bench");

    fn has_test_signature(i: &ast::Item) -> bool {
        match i.node {
@@ -562,7 +562,7 @@ fn mk_tests(cx: &TestCtxt) -> P {
}

fn is_test_crate(krate: &ast::Crate) -> bool {
-    match attr::find_crate_name(&krate.attrs[]) {
+    match attr::find_crate_name(&krate.attrs) {
        Some(ref s) if "test" == &s[..] => true,
        _ => false
    }
diff --git a/src/libsyntax/util/interner.rs b/src/libsyntax/util/interner.rs
index dffeac6f3f793..5be45a2698f40 100644
--- a/src/libsyntax/util/interner.rs
+++ b/src/libsyntax/util/interner.rs
@@ -18,7 +18,6 @@ use std::borrow::Borrow;
use std::cell::RefCell;
use std::cmp::Ordering;
use std::collections::HashMap;
-#[cfg(stage0)] use std::collections::hash_map::Hasher;
use std::fmt;
use std::hash::Hash;
use std::ops::Deref;
@@ -30,71 +29,6 @@ pub struct Interner {
}

// when traits can extend traits, we should extend index to get []
-#[cfg(stage0)]
-impl + Clone + 'static> Interner {
-    pub fn new() -> Interner {
-        Interner {
-            map: RefCell::new(HashMap::new()),
-            vect: RefCell::new(Vec::new()),
-        }
-    }
-
-    pub fn prefill(init: &[T]) -> Interner {
-        let rv = Interner::new();
-        for v in init {
-            rv.intern((*v).clone());
-        }
-        rv
-    }
-
-    pub fn intern(&self, val: T) -> Name {
-        let mut map = self.map.borrow_mut();
-        match (*map).get(&val) {
-            Some(&idx) => return idx,
-            None => (),
-        }
-
-        let mut vect = self.vect.borrow_mut();
-        let new_idx = Name((*vect).len() as u32);
-        (*map).insert(val.clone(), new_idx);
-        (*vect).push(val);
-        new_idx
-    }
-
-    pub fn gensym(&self, val: T) -> Name {
-        let mut vect = self.vect.borrow_mut();
-        let new_idx = Name((*vect).len() as u32);
-        // leave out of .map to avoid colliding
-        (*vect).push(val);
-        new_idx
-    }
-
-    pub fn get(&self, idx: Name) -> T {
-        let vect = self.vect.borrow();
-        (*vect)[idx.usize()].clone()
-    }
-
-    pub fn len(&self) -> usize {
-        let vect = self.vect.borrow();
-        (*vect).len()
-    }
-
-    pub fn find(&self, val: &Q) -> Option
-        where T: Borrow, Q: Eq + Hash {
-        let map = self.map.borrow();
-        match (*map).get(val) {
-            Some(v) => Some(*v),
-            None => None,
-        }
-    }
-
-    pub fn clear(&self) {
-        *self.map.borrow_mut() = HashMap::new();
-        *self.vect.borrow_mut() = Vec::new();
-    }
-}
-// when traits can extend traits, we should extend index to get []
-#[cfg(not(stage0))]
impl Interner {
    pub fn new() -> Interner {
        Interner {
@@ -275,15 +209,6 @@ impl StrInterner {
        self.vect.borrow().len()
    }

-    #[cfg(stage0)]
-    pub fn find(&self, val: &Q) -> Option
-        where RcStr: Borrow, Q: Eq + Hash {
-        match (*self.map.borrow()).get(val) {
-            Some(v) => Some(*v),
-            None => None,
-        }
-    }
-
-    #[cfg(not(stage0))]
    pub fn find(&self, val: &Q) -> Option
        where RcStr: Borrow, Q: Eq + Hash {
        match (*self.map.borrow()).get(val) {
diff --git a/src/snapshots.txt b/src/snapshots.txt
index 4759c44259d52..46a942b6eeb86 100644
--- a/src/snapshots.txt
+++ b/src/snapshots.txt
@@ -1,3 +1,12 @@
+S 2015-02-19 522d09d
+  freebsd-x86_64 7ea14ef85a25bca70a310a2cd660b356cf61abc7
+  linux-i386 26e3caa1ce1c482b9941a6bdc64b3e65d036c200
+  linux-x86_64 44f514aabb4e4049e4db9a4e1fdeb16f6cee60f2
+  macos-i386 157910592224083df56f5f31ced3e6f3dc9b1de0
+  macos-x86_64 56c28aa0e14ec6991ad6ca213568f1155561105d
+  winnt-i386 da0f7a3fbc913fbb177917f2850bb41501affb5c
+  winnt-x86_64 22bd816ccd2690fc9804b27ca525f603be8aeaa5
+
S 2015-02-17 f1bb6c2
  freebsd-x86_64 59f3a2c6350c170804fb65838e1b504eeab89105
  linux-i386 191ed5ec4f17e32d36abeade55a1c6085e51245c
diff --git a/src/test/compile-fail/if-loop.rs b/src/test/compile-fail/if-loop.rs
new file mode 100644
index 0000000000000..15f04df693981
--- /dev/null
+++ b/src/test/compile-fail/if-loop.rs
@@ -0,0 +1,20 @@
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(rustc_attrs)]
+#![allow(warnings)]
+
+// This used to ICE because the "if" being unreachable was not handled correctly
+fn err() {
+    if loop {} {}
+}
+
+#[rustc_error]
+fn main() {} //~ ERROR compilation successful
diff --git a/src/test/compile-fail/lint-unsafe-block.rs b/src/test/compile-fail/lint-unsafe-block.rs
deleted file mode 100644
index 56d2b2cd6c084..0000000000000
--- a/src/test/compile-fail/lint-unsafe-block.rs
+++ /dev/null
@@ -1,28 +0,0 @@
-// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-#![allow(unused_unsafe)]
-#![allow(dead_code)]
-#![deny(unsafe_blocks)]
-unsafe fn allowed() {}
-
-#[allow(unsafe_blocks)] fn also_allowed() { unsafe {} }
-
-macro_rules! unsafe_in_macro {
-    () => {
-        unsafe {} //~ ERROR: usage of an `unsafe` block
-    }
-}
-
-fn main() {
-    unsafe {} //~ ERROR: usage of an `unsafe` block
-
-    unsafe_in_macro!()
-}
diff --git a/src/test/compile-fail/lint-unsafe-code.rs b/src/test/compile-fail/lint-unsafe-code.rs
new file mode 100644
index 0000000000000..7b17d8877572f
--- /dev/null
+++ b/src/test/compile-fail/lint-unsafe-code.rs
@@ -0,0 +1,53 @@
+// Copyright 2013-2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![allow(unused_unsafe)]
+#![allow(dead_code)]
+#![deny(unsafe_code)]
+
+use std::marker::PhantomFn;
+
+struct Bar;
+
+#[allow(unsafe_code)]
+mod allowed_unsafe {
+    use std::marker::PhantomFn;
+    fn allowed() { unsafe {} }
+    unsafe fn also_allowed() {}
+    unsafe trait AllowedUnsafe : PhantomFn {}
+    unsafe impl AllowedUnsafe for super::Bar {}
+}
+
+macro_rules! unsafe_in_macro {
+    () => {
+        unsafe {} //~ ERROR: usage of an `unsafe` block
+    }
+}
+
+unsafe fn baz() {} //~ ERROR: declaration of an `unsafe` function
+unsafe trait Foo : PhantomFn {} //~ ERROR: declaration of an `unsafe` trait
+unsafe impl Foo for Bar {} //~ ERROR: implementation of an `unsafe` trait
+
+trait Baz {
+    unsafe fn baz(&self); //~ ERROR: declaration of an `unsafe` method
+    unsafe fn provided(&self) {} //~ ERROR: implementation of an `unsafe` method
+    unsafe fn provided_override(&self) {} //~ ERROR: implementation of an `unsafe` method
+}
+
+impl Baz for Bar {
+    unsafe fn baz(&self) {} //~ ERROR: implementation of an `unsafe` method
+    unsafe fn provided_override(&self) {} //~ ERROR: implementation of an `unsafe` method
+}
+
+fn main() {
+    unsafe {} //~ ERROR: usage of an `unsafe` block
+
+    unsafe_in_macro!()
+}
diff --git a/src/test/compile-fail/suggest-private-fields.rs b/src/test/compile-fail/suggest-private-fields.rs
new file mode 100644
index 0000000000000..30648498ba885
--- /dev/null
+++ b/src/test/compile-fail/suggest-private-fields.rs
@@ -0,0 +1,36 @@
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// aux-build:struct-field-privacy.rs
+
+extern crate "struct-field-privacy" as xc;
+
+use xc::B;
+
+struct A {
+    pub a: u32,
+    b: u32,
+}
+
+fn main () {
+    // external crate struct
+    let k = B {
+        aa: 20, //~ ERROR structure `struct-field-privacy::B` has no field named `aa`
+        //~^ HELP did you mean `a`?
+        bb: 20, //~ ERROR structure `struct-field-privacy::B` has no field named `bb`
+    };
+    // local crate struct
+    let l = A {
+        aa: 20, //~ ERROR structure `A` has no field named `aa`
+        //~^ HELP did you mean `a`?
+        bb: 20, //~ ERROR structure `A` has no field named `bb`
+        //~^ HELP did you mean `b`?
+    };
+}
diff --git a/src/test/run-pass/issue-22356.rs b/src/test/run-pass/issue-22356.rs
new file mode 100644
index 0000000000000..7c0ab11bc4462
--- /dev/null
+++ b/src/test/run-pass/issue-22356.rs
@@ -0,0 +1,39 @@
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use std::marker::{PhantomData, PhantomFn};
+
+pub struct Handle(T, I);
+
+impl Handle {
+    pub fn get_info(&self) -> &I {
+        let Handle(_, ref info) = *self;
+        info
+    }
+}
+
+pub struct BufferHandle {
+    raw: RawBufferHandle,
+    _marker: PhantomData,
+}
+
+impl BufferHandle {
+    pub fn get_info(&self) -> &String {
+        self.raw.get_info()
+    }
+}
+
+pub type RawBufferHandle = Handle<::Buffer, String>;
+
+pub trait Device: PhantomFn {
+    type Buffer;
+}
+
+fn main() {}